2026-03-09T18:15:17.325 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T18:15:17.333 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:15:17.360 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602
branch: squid
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
email: null
first_in_suite: false
flavor: default
job_id: '602'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    - CEPHADM_AGENT_DOWN
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOnC270SwkRq77PwhND1+gtY340lrp7TWIE75KRrVsJEWdbkYnhusGHffK2D8BZ2wmwi0ek2WxvFRNqMoCoi050=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4KxtV8IR3AH1bo1ccp1YlpE7xCeSStAw22griJZcLkZzXaB8gdmXPVyrXj/15Se7kKytT3BZjT2q9K9SjB0Wg=
tasks:
- cephadm:
    cephadm_branch: v17.2.0
    cephadm_git_url: https://github.com/ceph/ceph
    image: quay.io/ceph/ceph:v17.2.0
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - radosgw-admin realm create --rgw-realm=r --default
    - radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    - radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default
    - radosgw-admin period update --rgw-realm=r --commit
    - ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
    - sleep 180
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
    - ceph config set global log_to_journald false --force
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph orch ls
    - ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1
    - ceph orch ps --refresh
    - sleep 180
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph health detail
    - ceph versions | jq -e '.mgr | length == 2'
    - ceph mgr fail
    - sleep 180
    - ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1
    - ceph orch ps --refresh
    - sleep 180
    - ceph orch ps
    - ceph versions
    - ceph health detail
    - ceph -s
    - ceph mgr fail
    - sleep 180
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph health detail
    - ceph versions | jq -e '.mgr | length == 1'
    - ceph mgr fail
    - sleep 180
    - ceph orch ps
    - ceph orch ls
    - ceph versions
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph versions | jq -e '.mgr | length == 1'
    - ceph versions | jq -e '.mgr | keys' | grep $sha1
    - ceph versions | jq -e '.overall | length == 2'
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 2'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk '{print $2}')
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.mon | length == 2'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.y | awk '{print $2}')
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.mon | length == 1'
    - ceph versions | jq -e '.mon | keys' | grep $sha1
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 5'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types osd --limit 2
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 2'
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 7'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd --limit 1
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 2'
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 8'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 1'
    - ceph versions | jq -e '.osd | keys' | grep $sha1
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --services rgw.foo
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.rgw | length == 1'
    - ceph versions | jq -e '.rgw | keys' | grep $sha1
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done
    - ceph orch ps
    - ceph versions
    - echo "wait for servicemap items w/ changing names to refresh"
    - sleep 60
    - ceph orch ps
    - ceph versions
    - ceph orch upgrade status
    - ceph health detail
    - ceph versions | jq -e '.overall | length == 1'
    - ceph versions | jq -e '.overall | keys' | grep $sha1
    - ceph orch ls | grep '^osd '
- cephadm.shell:
    mon.a:
    - ceph orch upgrade ls
    - ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0
    - ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T18:15:17.360 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T18:15:17.361 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T18:15:17.361 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T18:15:17.361 INFO:teuthology.task.internal:Checking packages...
2026-03-09T18:15:17.361 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T18:15:17.361 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T18:15:17.361 INFO:teuthology.packaging:ref: None
2026-03-09T18:15:17.361 INFO:teuthology.packaging:tag: None
2026-03-09T18:15:17.361 INFO:teuthology.packaging:branch: squid
2026-03-09T18:15:17.361 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:15:17.361 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T18:15:17.997 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T18:15:17.998 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T18:15:17.999 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T18:15:17.999 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T18:15:17.999 INFO:teuthology.task.internal:Saving configuration
2026-03-09T18:15:18.007 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T18:15:18.008 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T18:15:18.015 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 18:14:13.828803', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOnC270SwkRq77PwhND1+gtY340lrp7TWIE75KRrVsJEWdbkYnhusGHffK2D8BZ2wmwi0ek2WxvFRNqMoCoi050='}
2026-03-09T18:15:18.021 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 18:14:13.828319', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4KxtV8IR3AH1bo1ccp1YlpE7xCeSStAw22griJZcLkZzXaB8gdmXPVyrXj/15Se7kKytT3BZjT2q9K9SjB0Wg='}
2026-03-09T18:15:18.021 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T18:15:18.022 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'node-exporter.a', 'alertmanager.a']
2026-03-09T18:15:18.022 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b']
2026-03-09T18:15:18.022 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T18:15:18.064 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-09T18:15:18.070 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-09T18:15:18.073 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f6ccdd72170>, signals=[15])
2026-03-09T18:15:18.073 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T18:15:18.074 INFO:teuthology.task.internal:Opening connections...
2026-03-09T18:15:18.074 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-09T18:15:18.075 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:15:18.135 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-09T18:15:18.136 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:15:18.197 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T18:15:18.198 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-09T18:15:18.229 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-09T18:15:18.229 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-09T18:15:18.275 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T18:15:18.275 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Ubuntu"
2026-03-09T18:15:18.275 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="22.04"
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_CODENAME=jammy
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:ID=ubuntu
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE=debian
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T18:15:18.276 INFO:teuthology.orchestra.run.vm00.stdout:UBUNTU_CODENAME=jammy
2026-03-09T18:15:18.276 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-09T18:15:18.282 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-09T18:15:18.285 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-09T18:15:18.285 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:NAME="Ubuntu"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="22.04"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_CODENAME=jammy
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:ID=ubuntu
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE=debian
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T18:15:18.330 INFO:teuthology.orchestra.run.vm08.stdout:UBUNTU_CODENAME=jammy
2026-03-09T18:15:18.330 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-09T18:15:18.335 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T18:15:18.338 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T18:15:18.339 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T18:15:18.339 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-09T18:15:18.340 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-09T18:15:18.374 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T18:15:18.375 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T18:15:18.375 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-09T18:15:18.385 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-09T18:15:18.387 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T18:15:18.418 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T18:15:18.419 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T18:15:18.428 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-09T18:15:18.431 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:15:18.665 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-09T18:15:18.667 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:15:19.083 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T18:15:19.084 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T18:15:19.084 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T18:15:19.085 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T18:15:19.088 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T18:15:19.089 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T18:15:19.090 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T18:15:19.090 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T18:15:19.128 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T18:15:19.135 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T18:15:19.136 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T18:15:19.136 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T18:15:19.173 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:15:19.173 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T18:15:19.176 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:15:19.176 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T18:15:19.216 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T18:15:19.224 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:15:19.227 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:15:19.228 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:15:19.231 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:15:19.232 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T18:15:19.234 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T18:15:19.234 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T18:15:19.272 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T18:15:19.282 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T18:15:19.289 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T18:15:19.289 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T18:15:19.320 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T18:15:19.329 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:15:19.370 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:15:19.414 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-09T18:15:19.414 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T18:15:19.467 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:15:19.470 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:15:19.513 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-09T18:15:19.513 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T18:15:19.562 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-09T18:15:19.563 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-09T18:15:19.620 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T18:15:19.623 INFO:teuthology.task.internal:Starting timer...
2026-03-09T18:15:19.623 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T18:15:19.627 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T18:15:19.629 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-09T18:15:19.629 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-09T18:15:19.629 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T18:15:19.629 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T18:15:19.629 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T18:15:19.629 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T18:15:19.631 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T18:15:19.631 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T18:15:19.632 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T18:15:20.293 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T18:15:20.299 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T18:15:20.300 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory33ajwwdw --limit vm00.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T18:18:21.704 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm08.local')]
2026-03-09T18:18:21.704 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-09T18:18:21.705 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:18:21.761 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-09T18:18:21.973 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-09T18:18:21.973 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-09T18:18:21.974 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:18:22.037 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-09T18:18:22.264 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-09T18:18:22.265 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T18:18:22.268 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T18:18:22.268 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T18:18:22.268 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:18:22.269 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T18:18:22.269 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:18:22.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T18:18:22.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Command line: ntpd -gq
2026-03-09T18:18:22.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: ----------------------------------------------------
2026-03-09T18:18:22.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: corporation. Support and training for ntp-4 are
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: available at https://www.nwtime.org/support
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: ----------------------------------------------------
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: proto: precision = 0.040 usec (-24)
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: basedate set to 2022-02-04
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: gps base set to 2022-02-06 (week 2196)
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stderr: 9 Mar 18:18:22 ntpd[16084]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listen normally on 3 ens3 192.168.123.100:123
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listen normally on 4 lo [::1]:123
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:0%2]:123
2026-03-09T18:18:22.286 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:22 ntpd[16084]: Listening on routing socket on fd #22 for interface updates
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Command line: ntpd -gq
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: ----------------------------------------------------
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: corporation. Support and training for ntp-4 are
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: available at https://www.nwtime.org/support
2026-03-09T18:18:22.323 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: ----------------------------------------------------
2026-03-09T18:18:22.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: proto: precision = 0.029 usec (-25)
2026-03-09T18:18:22.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: basedate set to 2022-02-04
2026-03-09T18:18:22.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: gps base set to 2022-02-06 (week 2196)
2026-03-09T18:18:22.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T18:18:22.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T18:18:22.325 INFO:teuthology.orchestra.run.vm08.stderr: 9 Mar 18:18:22 ntpd[16102]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T18:18:22.325 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T18:18:22.325 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T18:18:22.325 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T18:18:22.326 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listen normally on 3 ens3 192.168.123.108:123
2026-03-09T18:18:22.326 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listen normally on 4 lo [::1]:123
2026-03-09T18:18:22.326 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:8%2]:123
2026-03-09T18:18:22.326 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:22 ntpd[16102]: Listening on routing socket on fd #22 for interface updates
2026-03-09T18:18:23.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:23 ntpd[16084]: Soliciting pool server 178.63.67.56
2026-03-09T18:18:23.325 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:23 ntpd[16102]: Soliciting pool server 178.63.67.56
2026-03-09T18:18:24.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:24 ntpd[16084]: Soliciting pool server 85.214.38.116
2026-03-09T18:18:24.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:24 ntpd[16102]: Soliciting pool server 85.214.38.116
2026-03-09T18:18:24.405 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:24 ntpd[16102]: Soliciting pool server 131.234.220.232
2026-03-09T18:18:24.405 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:24 ntpd[16084]: Soliciting pool server 131.234.220.232
2026-03-09T18:18:25.284 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:25 ntpd[16084]: Soliciting pool server 45.92.216.108
2026-03-09T18:18:25.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:25 ntpd[16084]: Soliciting pool server 94.130.23.46
2026-03-09T18:18:25.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:25 ntpd[16084]: Soliciting pool server 148.251.54.81
2026-03-09T18:18:25.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:25 ntpd[16102]: Soliciting pool server 45.92.216.108
2026-03-09T18:18:25.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:25 ntpd[16102]: Soliciting pool server 94.130.23.46
2026-03-09T18:18:25.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:25 ntpd[16102]: Soliciting pool server 148.251.54.81
2026-03-09T18:18:26.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:26 ntpd[16084]: Soliciting pool server 141.84.43.73
2026-03-09T18:18:26.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:26 ntpd[16084]: Soliciting pool server 141.84.43.75 2026-03-09T18:18:26.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:26 ntpd[16084]: Soliciting pool server 193.141.27.1 2026-03-09T18:18:26.291 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:26 ntpd[16084]: Soliciting pool server 162.159.200.123 2026-03-09T18:18:26.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:26 ntpd[16102]: Soliciting pool server 141.84.43.73 2026-03-09T18:18:26.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:26 ntpd[16102]: Soliciting pool server 141.84.43.75 2026-03-09T18:18:26.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:26 ntpd[16102]: Soliciting pool server 193.141.27.1 2026-03-09T18:18:26.325 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:26 ntpd[16102]: Soliciting pool server 162.159.200.123 2026-03-09T18:18:27.284 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:27 ntpd[16084]: Soliciting pool server 93.241.86.156 2026-03-09T18:18:27.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:27 ntpd[16084]: Soliciting pool server 77.42.16.222 2026-03-09T18:18:27.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:27 ntpd[16084]: Soliciting pool server 194.59.205.229 2026-03-09T18:18:27.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:27 ntpd[16084]: Soliciting pool server 185.125.190.58 2026-03-09T18:18:27.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:27 ntpd[16102]: Soliciting pool server 93.241.86.156 2026-03-09T18:18:27.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:27 ntpd[16102]: Soliciting pool server 77.42.16.222 2026-03-09T18:18:27.325 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:27 ntpd[16102]: Soliciting pool server 185.125.190.58 2026-03-09T18:18:28.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:28 ntpd[16084]: Soliciting pool server 91.189.91.157 
2026-03-09T18:18:28.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:28 ntpd[16084]: Soliciting pool server 144.76.76.107 2026-03-09T18:18:28.285 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:28 ntpd[16084]: Soliciting pool server 2001:4ca0:4f0e:20::123:3 2026-03-09T18:18:28.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:28 ntpd[16102]: Soliciting pool server 91.189.91.157 2026-03-09T18:18:28.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:28 ntpd[16102]: Soliciting pool server 144.76.76.107 2026-03-09T18:18:28.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:28 ntpd[16102]: Soliciting pool server 2001:4ca0:4f0e:20::123:3 2026-03-09T18:18:29.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:29 ntpd[16102]: Soliciting pool server 185.125.190.57 2026-03-09T18:18:29.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:29 ntpd[16102]: Soliciting pool server 116.203.244.102 2026-03-09T18:18:30.324 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:30 ntpd[16102]: Soliciting pool server 185.125.190.56 2026-03-09T18:18:32.310 INFO:teuthology.orchestra.run.vm00.stdout: 9 Mar 18:18:32 ntpd[16084]: ntpd: time slew +0.002843 s 2026-03-09T18:18:32.310 INFO:teuthology.orchestra.run.vm00.stdout:ntpd: time slew +0.002843s 2026-03-09T18:18:32.334 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T18:18:32.334 INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-09T18:18:32.334 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.334 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.334 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.335 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.335 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.350 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 18:18:32 ntpd[16102]: ntpd: time slew +0.000324 s 2026-03-09T18:18:32.350 INFO:teuthology.orchestra.run.vm08.stdout:ntpd: time slew +0.000324s 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout:============================================================================== 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.370 INFO:teuthology.orchestra.run.vm08.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:18:32.370 INFO:teuthology.run_tasks:Running task cephadm... 
2026-03-09T18:18:32.427 INFO:tasks.cephadm:Config: {'cephadm_branch': 'v17.2.0', 'cephadm_git_url': 'https://github.com/ceph/ceph', 'image': 'quay.io/ceph/ceph:v17.2.0', 'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON', 'CEPHADM_AGENT_DOWN'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T18:18:32.427 INFO:tasks.cephadm:Cluster image is quay.io/ceph/ceph:v17.2.0
2026-03-09T18:18:32.427 INFO:tasks.cephadm:Cluster fsid is 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:18:32.427 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T18:18:32.427 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.100', 'mon.c': '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]', 'mon.b': '192.168.123.108'}
2026-03-09T18:18:32.427 INFO:tasks.cephadm:First mon is mon.a on vm00
2026-03-09T18:18:32.427 INFO:tasks.cephadm:First mgr is y
2026-03-09T18:18:32.427 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T18:18:32.427 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s)
2026-03-09T18:18:32.436 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s)
2026-03-09T18:18:32.445 INFO:tasks.cephadm:Downloading cephadm (repo https://github.com/ceph/ceph ref v17.2.0)...
2026-03-09T18:18:32.445 DEBUG:teuthology.orchestra.run.vm00:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T18:18:32.712 INFO:teuthology.orchestra.run.vm00.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 9 18:18 /home/ubuntu/cephtest/cephadm
2026-03-09T18:18:32.713 DEBUG:teuthology.orchestra.run.vm08:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T18:18:32.800 INFO:teuthology.orchestra.run.vm08.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 9 18:18 /home/ubuntu/cephtest/cephadm
2026-03-09T18:18:32.800 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T18:18:32.805 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T18:18:32.813 INFO:tasks.cephadm:Pulling image quay.io/ceph/ceph:v17.2.0 on all hosts...
2026-03-09T18:18:32.813 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull
2026-03-09T18:18:32.848 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull
2026-03-09T18:18:32.943 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-09T18:18:32.945 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout:{
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)",
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9",
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout: "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a"
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout: ]
2026-03-09T18:19:11.160 INFO:teuthology.orchestra.run.vm08.stdout:}
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)",
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9",
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout: "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a"
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout: ]
2026-03-09T18:19:11.487 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:19:11.500 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph
2026-03-09T18:19:11.508 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph
2026-03-09T18:19:11.517 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph
2026-03-09T18:19:11.559 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph
2026-03-09T18:19:11.567 INFO:tasks.cephadm:Writing seed config...
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T18:19:11.568 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T18:19:11.569 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-09T18:19:11.569 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T18:19:11.605 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 614f4990-1be4-11f1-8b84-dfd1edd9d965
mon election default strategy = 3
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T18:19:11.606 DEBUG:teuthology.orchestra.run.vm00:mon.a> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service
2026-03-09T18:19:11.647 DEBUG:teuthology.orchestra.run.vm00:mgr.y> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service
2026-03-09T18:19:11.691 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T18:19:11.691 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 -v bootstrap --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T18:19:11.832 INFO:teuthology.orchestra.run.vm00.stderr:--------------------------------------------------------------------------------
2026-03-09T18:19:11.832 INFO:teuthology.orchestra.run.vm00.stderr:cephadm ['--image', 'quay.io/ceph/ceph:v17.2.0', '-v', 'bootstrap', '--fsid', '614f4990-1be4-11f1-8b84-dfd1edd9d965', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.100', '--skip-admin-label']
2026-03-09T18:19:11.832 INFO:teuthology.orchestra.run.vm00.stderr:Verifying podman|docker is present...
2026-03-09T18:19:11.832 INFO:teuthology.orchestra.run.vm00.stderr:Verifying lvm2 is present...
2026-03-09T18:19:11.832 INFO:teuthology.orchestra.run.vm00.stderr:Verifying time synchronization is in place...
2026-03-09T18:19:11.835 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T18:19:11.838 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.841 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T18:19:11.843 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.846 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: masked
2026-03-09T18:19:11.848 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.850 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T18:19:11.852 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.855 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: enabled
2026-03-09T18:19:11.857 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: active
2026-03-09T18:19:11.858 INFO:teuthology.orchestra.run.vm00.stderr:Unit ntp.service is enabled and running
2026-03-09T18:19:11.858 INFO:teuthology.orchestra.run.vm00.stderr:Repeating the final host check...
2026-03-09T18:19:11.858 INFO:teuthology.orchestra.run.vm00.stderr:docker (/usr/bin/docker) is present
2026-03-09T18:19:11.858 INFO:teuthology.orchestra.run.vm00.stderr:systemctl is present
2026-03-09T18:19:11.858 INFO:teuthology.orchestra.run.vm00.stderr:lvcreate is present
2026-03-09T18:19:11.859 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T18:19:11.861 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.863 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T18:19:11.866 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.868 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: masked
2026-03-09T18:19:11.871 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.873 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T18:19:11.877 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: inactive
2026-03-09T18:19:11.881 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: enabled
2026-03-09T18:19:11.884 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: active
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Unit ntp.service is enabled and running
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Host looks OK
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Cluster fsid: 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Acquiring lock 139774225900512 on /run/cephadm/614f4990-1be4-11f1-8b84-dfd1edd9d965.lock
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Lock 139774225900512 acquired on /run/cephadm/614f4990-1be4-11f1-8b84-dfd1edd9d965.lock
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Verifying IP 192.168.123.100 port 3300 ...
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Verifying IP 192.168.123.100 port 6789 ...
2026-03-09T18:19:11.885 INFO:teuthology.orchestra.run.vm00.stderr:Base mon IP is 192.168.123.100, final addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-09T18:19:11.887 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.100 metric 100
2026-03-09T18:19:11.887 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-09T18:19:11.887 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.100 metric 100
2026-03-09T18:19:11.887 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.100 metric 100
2026-03-09T18:19:11.888 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T18:19:11.888 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-09T18:19:11.889 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T18:19:11.889 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: inet6 ::1/128 scope host
2026-03-09T18:19:11.889 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever
2026-03-09T18:19:11.889 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: 2: ens3: mtu 1500 state UP qlen 1000
2026-03-09T18:19:11.889 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: inet6 fe80::5055:ff:fe00:0/64 scope link
2026-03-09T18:19:11.889 INFO:teuthology.orchestra.run.vm00.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever
2026-03-09T18:19:11.890 INFO:teuthology.orchestra.run.vm00.stderr:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24`
2026-03-09T18:19:11.890 INFO:teuthology.orchestra.run.vm00.stderr:- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T18:19:11.890 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-09T18:19:12.944 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/docker: v17.2.0: Pulling from ceph/ceph
2026-03-09T18:19:12.953 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/docker: Digest: sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a
2026-03-09T18:19:12.953 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/docker: Status: Image is up to date for quay.io/ceph/ceph:v17.2.0
2026-03-09T18:19:12.954 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/docker: quay.io/ceph/ceph:v17.2.0
2026-03-09T18:19:13.095 INFO:teuthology.orchestra.run.vm00.stderr:ceph: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
2026-03-09T18:19:13.135 INFO:teuthology.orchestra.run.vm00.stderr:Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
2026-03-09T18:19:13.136 INFO:teuthology.orchestra.run.vm00.stderr:Extracting ceph user uid/gid from container image...
2026-03-09T18:19:13.216 INFO:teuthology.orchestra.run.vm00.stderr:stat: 167 167
2026-03-09T18:19:13.246 INFO:teuthology.orchestra.run.vm00.stderr:Creating initial keys...
2026-03-09T18:19:13.324 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-authtool: AQAhD69pivhAExAAi1F5FV8gbAWi8fakQ9Tsgg==
2026-03-09T18:19:13.428 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-authtool: AQAhD69phRZpGRAATPV8hHZB7wrCbo41A8iLlA==
2026-03-09T18:19:13.542 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-authtool: AQAhD69pTYw3IBAA+h0ZhD5ts2QdXpr1+eBc+A==
2026-03-09T18:19:13.589 INFO:teuthology.orchestra.run.vm00.stderr:Creating initial monmap...
2026-03-09T18:19:13.684 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T18:19:13.684 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/monmaptool: setting min_mon_release = octopus
2026-03-09T18:19:13.684 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: set fsid to 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:13.684 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T18:19:13.722 INFO:teuthology.orchestra.run.vm00.stderr:monmaptool for a [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T18:19:13.722 INFO:teuthology.orchestra.run.vm00.stderr:setting min_mon_release = octopus
2026-03-09T18:19:13.722 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/monmaptool: set fsid to 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:13.722 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T18:19:13.722 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-09T18:19:13.722 INFO:teuthology.orchestra.run.vm00.stderr:Creating mon...
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.813+0000 7f52a9b11880 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.813+0000 7f52a9b11880 1 imported monmap:
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: epoch 0
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: last_changed 2026-03-09T18:19:13.682656+0000
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: created 2026-03-09T18:19:13.682656+0000
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: min_mon_release 15 (octopus)
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: election_strategy: 1
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.818 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.813+0000 7f52a9b11880 0 /usr/bin/ceph-mon: set fsid to 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:13.819 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: RocksDB version: 6.15.5
2026-03-09T18:19:13.819 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.819 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2026-03-09T18:19:13.819 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Compile date Apr 18 2022
2026-03-09T18:19:13.823 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: DB SUMMARY
2026-03-09T18:19:13.823 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.823 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: DB Session ID: J4XVACYZRNGS8LD5S3CR
2026-03-09T18:19:13.823 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.823 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-09T18:19:13.823 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.error_if_exists: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.create_if_missing: 1
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.env: 0x55d18cac6860
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.fs: Posix File System
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.info_log: 0x55d1c539d320
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.statistics: (nil)
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.use_fsync: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.db_log_dir:
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.write_buffer_manager: 0x55d1c563d950
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T18:19:13.824 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.unordered_write: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.row_cache: None 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.wal_filter: None 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T18:19:13.825 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.preserve_deletes: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.two_write_queues: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.atomic_flush: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 
debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 
debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_open_files: -1 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T18:19:13.825 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Compression algorithms supported: 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kZSTD supported: 0 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kXpressCompression supported: 0 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kZlibCompression supported: 1 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.817+0000 7f52a9b11880 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.821+0000 7f52a9b11880 4 rocksdb: [db/db_impl/db_impl_open.cc:281] Creating manifest 1 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: 
/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T18:19:13.826 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 2026-03-09T18:19:13.827 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]: 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.merge_operator: 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_filter: None 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1c5366d10) 2026-03-09T18:19:13.828 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: pin_top_level_index_and_filter: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: index_type: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: data_block_index_type: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: index_shortening: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: hash_index_allow_collision: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: checksum: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: no_block_cache: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_cache: 0x55d1c53ce170 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_cache_name: BinnedLRUCache 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_cache_options: 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: capacity : 536870912 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: num_shard_bits : 4 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: strict_capacity_limit : 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: high_pri_pool_ratio: 
0.000 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_cache_compressed: (nil) 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: persistent_cache: (nil) 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_size: 4096 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_size_deviation: 10 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_restart_interval: 16 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: index_block_restart_interval: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: metadata_block_size: 4096 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: partition_filters: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: use_delta_encoding: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: filter_policy: rocksdb.BuiltinBloomFilter 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: whole_key_filtering: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: verify_compression: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: read_amp_bytes_per_bit: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: format_version: 4 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: enable_index_compression: 1 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: block_align: 0 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: 
Options.write_buffer_size: 33554432 2026-03-09T18:19:13.828 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression: NoCompression 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.num_levels: 7 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:19:13.830 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:19:13.830 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.arena_block_size: 4194304 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.table_properties_collectors: 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: 
Options.bloom_locality: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.max_successive_merges: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.force_consistency_checks: 1
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.ttl: 2592000
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.enable_blob_files: false
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.min_blob_size: 0
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.blob_file_size: 268435456
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-09T18:19:13.831 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 0
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.825+0000 7f52a9b11880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 3
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.829+0000 7f52a9b11880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x55d1c53b4700
2026-03-09T18:19:13.832 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.829+0000 7f52a9b11880 4 rocksdb: DB pointer 0x55d1c5428000
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.829+0000 7f529b6fb700 4 rocksdb: [db/db_impl/db_impl.cc:902] ------- DUMPING STATS -------
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.829+0000 7f529b6fb700 4 rocksdb: [db/db_impl/db_impl.cc:903]
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** DB Stats **
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-09T18:19:13.834 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T18:19:13.835 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon:
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.833+0000 7f52a9b11880 4 rocksdb: [db/db_impl/db_impl.cc:447] Shutdown: canceling all background work
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.833+0000 7f52a9b11880 4 rocksdb: [db/db_impl/db_impl.cc:625] Shutdown complete
2026-03-09T18:19:13.836 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph-mon: debug 2026-03-09T18:19:13.833+0000 7f52a9b11880 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-09T18:19:13.873 INFO:teuthology.orchestra.run.vm00.stderr:create mon.a on
2026-03-09T18:19:14.032 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-09T18:19:14.194 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965.target → /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965.target.
2026-03-09T18:19:14.194 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Created symlink /etc/systemd/system/ceph.target.wants/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965.target → /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965.target.
2026-03-09T18:19:14.556 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to reset failed state of unit ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service: Unit ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service not loaded.
2026-03-09T18:19:14.562 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Created symlink /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965.target.wants/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service → /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service.
2026-03-09T18:19:14.750 INFO:teuthology.orchestra.run.vm00.stderr:firewalld does not appear to be present
2026-03-09T18:19:14.751 INFO:teuthology.orchestra.run.vm00.stderr:Not possible to enable service . firewalld.service is not available
2026-03-09T18:19:14.753 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for mon to start...
2026-03-09T18:19:14.753 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for mon...
2026-03-09T18:19:15.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:14 vm00 bash[17024]: cluster 2026-03-09T18:19:14.910235+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: cluster:
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: id: 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: health: HEALTH_OK
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: services:
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon: 1 daemons, quorum a (age 0.246099s)
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mgr: no daemons active
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd: 0 osds: 0 up, 0 in
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: data:
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: pools: 0 pools, 0 pgs
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: objects: 0 objects, 0 B
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: usage: 0 B used, 0 B / 0 B avail
2026-03-09T18:19:15.165 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: pgs:
2026-03-09T18:19:15.166 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:15.202 INFO:teuthology.orchestra.run.vm00.stderr:mon is available
2026-03-09T18:19:15.202 INFO:teuthology.orchestra.run.vm00.stderr:Assimilating anything we can from ceph.conf...
2026-03-09T18:19:15.409 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:15.409 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: [global]
2026-03-09T18:19:15.409 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: fsid = 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:15.409 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: [mgr]
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mgr/cephadm/use_agent = False
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mgr/telemetry/nag = false
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: [osd]
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_map_max_advance = 10
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000
2026-03-09T18:19:15.410 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_sloppy_crc = true
2026-03-09T18:19:15.457 INFO:teuthology.orchestra.run.vm00.stderr:Generating new minimal ceph.conf...
2026-03-09T18:19:15.666 INFO:teuthology.orchestra.run.vm00.stderr:Restarting the monitor...
2026-03-09T18:19:15.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 systemd[1]: Stopping Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:19:15.831 INFO:teuthology.orchestra.run.vm00.stderr:Setting mon public_network to 192.168.123.0/24
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17391]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon.a
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17024]: debug 2026-03-09T18:19:15.689+0000 7fb9362e0700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17024]: debug 2026-03-09T18:19:15.689+0000 7fb9362e0700 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17400]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon-a
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17434]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon.a
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service: Deactivated successfully.
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 systemd[1]: Stopped Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:19:15.960 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 systemd[1]: Started Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:19:16.116 INFO:teuthology.orchestra.run.vm00.stderr:Wrote config to /etc/ceph/ceph.conf
2026-03-09T18:19:16.116 INFO:teuthology.orchestra.run.vm00.stderr:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-09T18:19:16.116 INFO:teuthology.orchestra.run.vm00.stderr:Creating mgr...
2026-03-09T18:19:16.116 INFO:teuthology.orchestra.run.vm00.stderr:Verifying port 9283 ...
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.957+0000 7fd656b56880 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.957+0000 7fd656b56880 0 ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable), process ceph-mon, pid 7
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.957+0000 7fd656b56880 0 pidfile_write: ignore empty --pid-file
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 0 load: jerasure load: lrc
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: RocksDB version: 6.15.5
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Compile date Apr 18 2022
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: DB SUMMARY
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: DB Session ID: CJCED3B42FXGM29VHTC9
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: CURRENT file: CURRENT
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 131 Bytes
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000010.log size: 73715 ;
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.error_if_exists: 0
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.create_if_missing: 0
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.env: 0x55ebaa6c6860
2026-03-09T18:19:16.257 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.fs: Posix File System
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.info_log: 0x55ebb8d7fe00
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.statistics: (nil)
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.use_fsync: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.db_log_dir:
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.write_buffer_manager: 0x55ebb8e70270
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.unordered_write: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.row_cache: None
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.wal_filter: None
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.preserve_deletes: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.two_write_queues: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.atomic_flush: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T18:19:16.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_open_files: -1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 
vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Compression algorithms supported: 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kZSTD supported: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kXpressCompression supported: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kZlibCompression supported: 1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: kSnappyCompression supported: 1 
2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]: 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.merge_operator: 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_filter: None 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 
rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ebb8d4dd00) 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: cache_index_and_filter_blocks: 1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: pin_top_level_index_and_filter: 1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: index_type: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: data_block_index_type: 0 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: index_shortening: 1 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:19:16.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: hash_index_allow_collision: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: checksum: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: no_block_cache: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_cache: 0x55ebb8db4170 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_cache_name: BinnedLRUCache 2026-03-09T18:19:16.260 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_cache_options: 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: capacity : 536870912 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: num_shard_bits : 4 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: strict_capacity_limit : 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: high_pri_pool_ratio: 0.000 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_cache_compressed: (nil) 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: persistent_cache: (nil) 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_size: 4096 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_size_deviation: 10 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_restart_interval: 16 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: index_block_restart_interval: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: metadata_block_size: 4096 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: partition_filters: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: use_delta_encoding: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: filter_policy: rocksdb.BuiltinBloomFilter 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: whole_key_filtering: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:19:15 vm00 bash[17468]: verify_compression: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: read_amp_bytes_per_bit: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: format_version: 4 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: enable_index_compression: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: block_align: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression: NoCompression 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.num_levels: 7 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 
2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:19:16.260 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:19:16.260 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:19:16.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.arena_block_size: 4194304 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 
2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.961+0000 7fd656b56880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 
rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.table_properties_collectors: 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.bloom_locality: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.paranoid_file_checks: 0 
2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.ttl: 2592000 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.enable_blob_files: false 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.min_blob_size: 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.965+0000 7fd656b56880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.969+0000 7fd656b56880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 11, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.969+0000 7fd656b56880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 5 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.969+0000 7fd656b56880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 13 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.969+0000 7fd656b56880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773080355973092, "job": 1, "event": "recovery_started", "wal_files": [10]} 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.969+0000 7fd656b56880 4 rocksdb: [db/db_impl/db_impl_open.cc:847] Recovering log #10 mode 2 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.973+0000 7fd656b56880 3 rocksdb: [table/block_based/filter_policy.cc:996] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5. 
2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.973+0000 7fd656b56880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773080355975070, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 14, "file_size": 70687, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 69004, "index_size": 176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 9687, "raw_average_key_size": 49, "raw_value_size": 63573, "raw_average_value_size": 324, "num_data_blocks": 8, "num_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1773080355, "oldest_key_time": 0, "file_creation_time": 0, "db_id": "2e21df9f-6d93-41b1-8998-7924d2dfcd8c", "db_session_id": "CJCED3B42FXGM29VHTC9"}} 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.973+0000 7fd656b56880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 15 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.973+0000 7fd656b56880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773080355977150, "job": 1, "event": "recovery_finished"} 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 4 rocksdb: 
[file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000010.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x55ebb8d9aa80 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 4 rocksdb: DB pointer 0x55ebb8daa000 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 starting mon.a rank 0 at public addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] at bind addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 1 mon.a@-1(???) 
e1 preinit fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 mon.a@-1(???).mds e1 new map 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 mon.a@-1(???).mds e1 print_map 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: e1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: legacy client fscid: -1 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: No filesystems configured 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:19:16.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 mon.a@-1(???).osd e1 crush map has 
features 288514050185494528, adjusting msgr requires 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:15 vm00 bash[17468]: debug 2026-03-09T18:19:15.977+0000 7fd656b56880 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 bash[17468]: cluster 2026-03-09T18:19:15.984836+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 bash[17468]: cluster 2026-03-09T18:19:15.984881+0000 mon.a (mon.0) 2 : cluster [DBG] monmap e1: 1 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0]} 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 bash[17468]: cluster 2026-03-09T18:19:15.984931+0000 mon.a (mon.0) 3 : cluster [DBG] fsmap 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 bash[17468]: cluster 2026-03-09T18:19:15.984947+0000 mon.a (mon.0) 4 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T18:19:16.262 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 bash[17468]: cluster 2026-03-09T18:19:15.985552+0000 mon.a (mon.0) 5 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T18:19:16.329 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Failed to reset failed state of unit ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service: Unit ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service not loaded. 
2026-03-09T18:19:16.333 INFO:teuthology.orchestra.run.vm00.stderr:systemctl: Created symlink /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965.target.wants/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service → /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service. 2026-03-09T18:19:16.526 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:19:16.526 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:16 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:19:16.526 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:16 vm00 systemd[1]: Started Ceph mgr.y for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:19:16.527 INFO:teuthology.orchestra.run.vm00.stderr:firewalld does not appear to be present 2026-03-09T18:19:16.527 INFO:teuthology.orchestra.run.vm00.stderr:Not possible to enable service . firewalld.service is not available 2026-03-09T18:19:16.527 INFO:teuthology.orchestra.run.vm00.stderr:firewalld does not appear to be present 2026-03-09T18:19:16.527 INFO:teuthology.orchestra.run.vm00.stderr:Not possible to open ports <[9283]>. firewalld.service is not available 2026-03-09T18:19:16.527 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for mgr to start... 
2026-03-09T18:19:16.527 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for mgr... 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: { 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsid": "614f4990-1be4-11f1-8b84-dfd1edd9d965", 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "health": { 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 0 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "a" 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_age": 0, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "monmap": { 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-09T18:19:16.763 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T18:19:16.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsmap": { 
2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "available": false, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "iostat", 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "nfs", 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "restful" 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {} 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modified": "2026-03-09T18:19:14.917989+0000", 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {} 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T18:19:16.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: } 2026-03-09T18:19:16.777 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:16 vm00 bash[17744]: debug 2026-03-09T18:19:16.769+0000 7f260b028000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:19:16.811 INFO:teuthology.orchestra.run.vm00.stderr:mgr not available, waiting (1/15)... 2026-03-09T18:19:17.073 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:16 vm00 bash[17744]: debug 2026-03-09T18:19:16.837+0000 7f260b028000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:19:17.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:17 vm00 bash[17468]: audit 2026-03-09T18:19:16.071562+0000 mon.a (mon.0) 6 : audit [INF] from='client.? 192.168.123.100:0/2933873712' entity='client.admin' 2026-03-09T18:19:17.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:17 vm00 bash[17468]: audit 2026-03-09T18:19:16.760184+0000 mon.a (mon.0) 7 : audit [DBG] from='client.? 192.168.123.100:0/2082126079' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:19:17.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:17 vm00 bash[17744]: debug 2026-03-09T18:19:17.189+0000 7f260b028000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:19:18.073 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:17 vm00 bash[17744]: debug 2026-03-09T18:19:17.757+0000 7f260b028000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:19:18.073 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:17 vm00 bash[17744]: debug 2026-03-09T18:19:17.857+0000 7f260b028000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:19:18.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:18 vm00 bash[17744]: debug 2026-03-09T18:19:18.069+0000 7f260b028000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:19:18.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:18 vm00 bash[17744]: debug 2026-03-09T18:19:18.173+0000 7f260b028000 -1 mgr[py] 
Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:19:18.380 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:18 vm00 bash[17744]: debug 2026-03-09T18:19:18.229+0000 7f260b028000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:19:18.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:18 vm00 bash[17744]: debug 2026-03-09T18:19:18.377+0000 7f260b028000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:19:18.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:18 vm00 bash[17744]: debug 2026-03-09T18:19:18.441+0000 7f260b028000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:19:18.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:18 vm00 bash[17744]: debug 2026-03-09T18:19:18.513+0000 7f260b028000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:19:19.070 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsid": "614f4990-1be4-11f1-8b84-dfd1edd9d965", 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "health": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 0 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 
2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "a" 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_age": 3, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "monmap": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pgs": 
0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsmap": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "available": false, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "iostat", 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "nfs", 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "restful" 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {} 2026-03-09T18:19:19.071 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:19.071 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modified": "2026-03-09T18:19:14.917989+0000", 2026-03-09T18:19:19.072 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {} 2026-03-09T18:19:19.072 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:19.072 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T18:19:19.072 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: } 2026-03-09T18:19:19.122 INFO:teuthology.orchestra.run.vm00.stderr:mgr not available, waiting (2/15)... 2026-03-09T18:19:19.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:19 vm00 bash[17468]: audit 2026-03-09T18:19:19.067127+0000 mon.a (mon.0) 8 : audit [DBG] from='client.? 
192.168.123.100:0/4293404365' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:19:19.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.097+0000 7f260b028000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:19:19.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.169+0000 7f260b028000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:19:19.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.233+0000 7f260b028000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:19:19.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.585+0000 7f260b028000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:19:19.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.657+0000 7f260b028000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:19:19.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.729+0000 7f260b028000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:19:20.254 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:19 vm00 bash[17744]: debug 2026-03-09T18:19:19.893+0000 7f260b028000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:19:20.569 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:20 vm00 bash[17744]: debug 2026-03-09T18:19:20.249+0000 7f260b028000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:19:20.570 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:20 vm00 bash[17744]: debug 2026-03-09T18:19:20.497+0000 7f260b028000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:19:20.821 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:20 vm00 bash[17744]: debug 2026-03-09T18:19:20.565+0000 7f260b028000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:19:20.822 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:20 vm00 bash[17744]: debug 2026-03-09T18:19:20.649+0000 7f260b028000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:19:20.822 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:20 vm00 bash[17744]: debug 2026-03-09T18:19:20.817+0000 7f260b028000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: { 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsid": "614f4990-1be4-11f1-8b84-dfd1edd9d965", 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "health": { 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 0 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "a" 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 
2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_age": 5, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "monmap": { 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:21.401 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T18:19:21.402 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsmap": { 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "available": false, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "iostat", 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "nfs", 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "restful" 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ], 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {} 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T18:19:21.402 
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modified": "2026-03-09T18:19:14.917989+0000", 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {} 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }, 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T18:19:21.402 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: } 2026-03-09T18:19:21.449 INFO:teuthology.orchestra.run.vm00.stderr:mgr not available, waiting (3/15)... 2026-03-09T18:19:21.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:21 vm00 bash[17744]: debug 2026-03-09T18:19:21.473+0000 7f260b028000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.397121+0000 mon.a (mon.0) 9 : audit [DBG] from='client.? 
192.168.123.100:0/3199459519' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: cluster 2026-03-09T18:19:21.474696+0000 mon.a (mon.0) 10 : cluster [INF] Activating manager daemon y
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: cluster 2026-03-09T18:19:21.479074+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00450222s)
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.479721+0000 mon.a (mon.0) 12 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.479772+0000 mon.a (mon.0) 13 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.479824+0000 mon.a (mon.0) 14 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.480872+0000 mon.a (mon.0) 15 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.481006+0000 mon.a (mon.0) 16 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: cluster 2026-03-09T18:19:21.486129+0000 mon.a (mon.0) 17 : cluster [INF] Manager daemon y is now available
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.495460+0000 mon.a (mon.0) 18 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.496436+0000 mon.a (mon.0) 19 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.501843+0000 mon.a (mon.0) 20 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y'
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.505458+0000 mon.a (mon.0) 21 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y'
2026-03-09T18:19:22.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:22 vm00 bash[17468]: audit 2026-03-09T18:19:21.508800+0000 mon.a (mon.0) 22 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y'
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: {
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsid": "614f4990-1be4-11f1-8b84-dfd1edd9d965",
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "health": {
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "status": "HEALTH_OK",
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "checks": {},
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mutes": []
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "election_epoch": 5,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum": [
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 0
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ],
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_names": [
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "a"
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ],
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "quorum_age": 7,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "monmap": {
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy",
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_mons": 1
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osdmap": {
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_osds": 0,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_up_osds": 0,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_up_since": 0,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_in_osds": 0,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "osd_in_since": 0,
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_remapped_pgs": 0
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgmap": {
2026-03-09T18:19:23.763 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "pgs_by_state": [],
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pgs": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_pools": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_objects": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "data_bytes": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_used": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_avail": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "bytes_total": 0
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "fsmap": {
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "by_rank": [],
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "up:standby": 0
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mgrmap": {
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "available": true,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_standbys": 0,
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modules": [
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "iostat",
2026-03-09T18:19:23.764 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "nfs",
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "restful"
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ],
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {}
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "servicemap": {
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "modified": "2026-03-09T18:19:14.917989+0000",
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "services": {}
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: },
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "progress_events": {}
2026-03-09T18:19:23.765 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }
2026-03-09T18:19:23.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:23 vm00 bash[17468]: cluster 2026-03-09T18:19:22.481223+0000 mon.a (mon.0) 23 : cluster [DBG] mgrmap e3: y(active, since 1.00666s)
2026-03-09T18:19:23.808 INFO:teuthology.orchestra.run.vm00.stderr:mgr is available
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: [global]
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: fsid = 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: [mgr]
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: mgr/telemetry/nag = false
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph:
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: [osd]
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_map_max_advance = 10
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000
2026-03-09T18:19:24.135 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: osd_sloppy_crc = true
2026-03-09T18:19:24.197 INFO:teuthology.orchestra.run.vm00.stderr:Enabling cephadm module...
2026-03-09T18:19:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:24 vm00 bash[17468]: cluster 2026-03-09T18:19:23.483997+0000 mon.a (mon.0) 24 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-09T18:19:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:24 vm00 bash[17468]: audit 2026-03-09T18:19:23.760492+0000 mon.a (mon.0) 25 : audit [DBG] from='client.? 192.168.123.100:0/80165702' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T18:19:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:24 vm00 bash[17468]: audit 2026-03-09T18:19:24.064975+0000 mon.a (mon.0) 26 : audit [INF] from='client.? 192.168.123.100:0/2769820379' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T18:19:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:24 vm00 bash[17468]: audit 2026-03-09T18:19:24.131112+0000 mon.a (mon.0) 27 : audit [INF] from='client.? 192.168.123.100:0/2769820379' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-09T18:19:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:24 vm00 bash[17468]: audit 2026-03-09T18:19:24.471352+0000 mon.a (mon.0) 28 : audit [INF] from='client.? 192.168.123.100:0/3124075155' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T18:19:25.484 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: {
2026-03-09T18:19:25.484 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 5,
2026-03-09T18:19:25.484 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "available": true,
2026-03-09T18:19:25.484 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "active_name": "y",
2026-03-09T18:19:25.484 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_standby": 0
2026-03-09T18:19:25.484 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }
2026-03-09T18:19:25.495 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:25 vm00 bash[17744]: ignoring --setuser ceph since I am not root
2026-03-09T18:19:25.495 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:25 vm00 bash[17744]: ignoring --setgroup ceph since I am not root
2026-03-09T18:19:25.495 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:25 vm00 bash[17744]: debug 2026-03-09T18:19:25.301+0000 7f5b9bd07000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:19:25.496 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:25 vm00 bash[17744]: debug 2026-03-09T18:19:25.357+0000 7f5b9bd07000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:19:25.536 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for the mgr to restart...
2026-03-09T18:19:25.537 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for mgr epoch 5...
2026-03-09T18:19:26.133 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:25 vm00 bash[17744]: debug 2026-03-09T18:19:25.749+0000 7f5b9bd07000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:19:26.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:26 vm00 bash[17468]: audit 2026-03-09T18:19:25.133568+0000 mon.a (mon.0) 29 : audit [INF] from='client.? 192.168.123.100:0/3124075155' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T18:19:26.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:26 vm00 bash[17468]: cluster 2026-03-09T18:19:25.134199+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e5: y(active, since 3s)
2026-03-09T18:19:26.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:26 vm00 bash[17468]: audit 2026-03-09T18:19:25.482128+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 192.168.123.100:0/487389164' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T18:19:26.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.253+0000 7f5b9bd07000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T18:19:26.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.349+0000 7f5b9bd07000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T18:19:26.869 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.553+0000 7f5b9bd07000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T18:19:26.869 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.657+0000 7f5b9bd07000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T18:19:26.869 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.717+0000 7f5b9bd07000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T18:19:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.865+0000 7f5b9bd07000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T18:19:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:26 vm00 bash[17744]: debug 2026-03-09T18:19:26.933+0000 7f5b9bd07000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T18:19:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:27 vm00 bash[17744]: debug 2026-03-09T18:19:27.005+0000 7f5b9bd07000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T18:19:27.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:27 vm00 bash[17744]: debug 2026-03-09T18:19:27.553+0000 7f5b9bd07000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T18:19:27.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:27 vm00 bash[17744]: debug 2026-03-09T18:19:27.613+0000 7f5b9bd07000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T18:19:27.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:27 vm00 bash[17744]: debug 2026-03-09T18:19:27.669+0000 7f5b9bd07000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T18:19:28.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:27 vm00 bash[17744]: debug 2026-03-09T18:19:27.993+0000 7f5b9bd07000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T18:19:28.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.057+0000 7f5b9bd07000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T18:19:28.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.121+0000 7f5b9bd07000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T18:19:28.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.209+0000 7f5b9bd07000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:19:28.796 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.529+0000 7f5b9bd07000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T18:19:28.796 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.729+0000 7f5b9bd07000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T18:19:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.793+0000 7f5b9bd07000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T18:19:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:28 vm00 bash[17744]: debug 2026-03-09T18:19:28.857+0000 7f5b9bd07000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T18:19:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:29 vm00 bash[17744]: debug 2026-03-09T18:19:29.017+0000 7f5b9bd07000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:19:29.854 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:29 vm00 bash[17468]: cluster 2026-03-09T18:19:29.532244+0000 mon.a (mon.0) 32 : cluster [INF] Active manager daemon y restarted
2026-03-09T18:19:29.854 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:29 vm00 bash[17468]: cluster 2026-03-09T18:19:29.533304+0000 mon.a (mon.0) 33 : cluster [INF] Activating manager daemon y
2026-03-09T18:19:29.854 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:29 vm00 bash[17468]: cluster 2026-03-09T18:19:29.535222+0000 mon.a (mon.0) 34 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-09T18:19:29.854 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:29 vm00 bash[17744]: debug 2026-03-09T18:19:29.529+0000 7f5b9bd07000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T18:19:30.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:29 vm00 bash[17744]: [09/Mar/2026:18:19:29] ENGINE Bus STARTING
2026-03-09T18:19:30.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:29 vm00 bash[17744]: [09/Mar/2026:18:19:29] ENGINE Serving on https://192.168.123.100:7150
2026-03-09T18:19:30.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:29 vm00 bash[17744]: [09/Mar/2026:18:19:29] ENGINE Bus STARTED
2026-03-09T18:19:30.620 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: {
2026-03-09T18:19:30.620 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mgrmap_epoch": 7,
2026-03-09T18:19:30.620 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "initialized": true
2026-03-09T18:19:30.620 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: }
2026-03-09T18:19:30.668 INFO:teuthology.orchestra.run.vm00.stderr:mgr epoch 5 is available
2026-03-09T18:19:30.668 INFO:teuthology.orchestra.run.vm00.stderr:Setting orchestrator backend to cephadm...
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: cluster 2026-03-09T18:19:29.587851+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0547116s)
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.628161+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.629149+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.630158+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.630302+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.630386+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: cluster 2026-03-09T18:19:29.635683+0000 mon.a (mon.0) 41 : cluster [INF] Manager daemon y is now available
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.647969+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.650713+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.658723+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.659378+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.661410+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.662698+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.669940+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:19:30.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: cephadm 2026-03-09T18:19:29.853613+0000 mgr.y (mgr.14120) 1 : cephadm [INF] [09/Mar/2026:18:19:29] ENGINE Bus STARTING
2026-03-09T18:19:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: cephadm 2026-03-09T18:19:29.964546+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [09/Mar/2026:18:19:29] ENGINE Serving on https://192.168.123.100:7150
2026-03-09T18:19:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: cephadm 2026-03-09T18:19:29.964677+0000 mgr.y (mgr.14120) 3 : cephadm [INF] [09/Mar/2026:18:19:29] ENGINE Bus STARTED
2026-03-09T18:19:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.968421+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:30 vm00 bash[17468]: audit 2026-03-09T18:19:29.977186+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:19:31.806 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: value unchanged
2026-03-09T18:19:31.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:31 vm00 bash[17468]: cluster 2026-03-09T18:19:30.594865+0000 mon.a (mon.0) 51 : cluster [DBG] mgrmap e7: y(active, since 1.06173s)
2026-03-09T18:19:31.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:31 vm00 bash[17468]: audit 2026-03-09T18:19:30.604531+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T18:19:31.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:31 vm00 bash[17468]: audit 2026-03-09T18:19:30.612180+0000 mgr.y (mgr.14120) 5 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T18:19:31.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:31 vm00 bash[17468]: audit 2026-03-09T18:19:31.134322+0000 mgr.y (mgr.14120) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:31.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:31 vm00 bash[17468]: audit 2026-03-09T18:19:31.156515+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:31.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:31 vm00 bash[17468]: audit 2026-03-09T18:19:31.195150+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:19:31.857 INFO:teuthology.orchestra.run.vm00.stderr:Generating ssh key...
2026-03-09T18:19:32.464 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXIBHcD0alp5127bFN4bO1tRnJdPN2RTTLx3j/D/CFpx5KHXt7gTW/YMMaS+0l+KbOl+uZ+ZHCvr1DnqwCTtOQvpClt7AQ3pt2bzYKuesEiivugjS80vDnsau5n7dORfPRPYuimKGFqlohKLTQTe0Z6NmR/LFwS3hgIsMVFMKR9kgTxONK0i4wHSsb+jqZhSKMP6em7xnQfskC9vNdncm6Yd04uGpm4Dxj+OWIlBFk0CyLeSxAtuaSP8E9rltvG5vMbnu2BHbCPBAP62gn/Bk2uQa2XugcKAcf0Rbr2DBhj9ndAeOvLTW+YHHM6ygckGu9VTKEmDbuR27jeF06BAhHDazfwTuK1hdfp704P9RHmn/wR5iXxKZ4saysWlAopDlUeEwrs2h0dRrU8/+SOrSAYN4CL9+HYAJgtbDg6K4emHtD3oFI6a9zthYyQ/mjwrAinwEqL2jNzRcAJXdgQ1h3SYK/YDRsEo7znh2k6Yy8yaBxrSF41vBWQBT9bFCLSo8= ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: Generating public/private rsa key pair.
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: Your identification has been saved in /tmp/tmpk1u2x3v0/key.
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: Your public key has been saved in /tmp/tmpk1u2x3v0/key.pub.
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: The key fingerprint is:
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: SHA256:msKoNWmyzOOB9PiP1pYkkYUJjSy+16yCC7mlrJeQ2tw ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: The key's randomart image is:
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: +---[RSA 3072]----+
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |..+ o |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |.o + . |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |o o |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: | . o |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: | o. + S |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |=oo* + o |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |B*O+B + |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |B%B+E= |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: |XB+oo. |
2026-03-09T18:19:32.476 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:32 vm00 bash[17744]: +----[SHA256]-----+
2026-03-09T18:19:32.504 INFO:teuthology.orchestra.run.vm00.stderr:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-09T18:19:32.504 INFO:teuthology.orchestra.run.vm00.stderr:Adding key to root@localhost authorized_keys...
2026-03-09T18:19:32.504 INFO:teuthology.orchestra.run.vm00.stderr:Adding host vm00...
2026-03-09T18:19:33.245 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: Added host 'vm00' with addr '192.168.123.100'
2026-03-09T18:19:33.294 INFO:teuthology.orchestra.run.vm00.stderr:Deploying unmanaged mon service...
2026-03-09T18:19:33.338 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:33 vm00 bash[17468]: audit 2026-03-09T18:19:31.803955+0000 mgr.y (mgr.14120) 7 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:33.338 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:33 vm00 bash[17468]: audit 2026-03-09T18:19:32.103969+0000 mgr.y (mgr.14120) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:33.338 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:33 vm00 bash[17468]: cephadm 2026-03-09T18:19:32.104209+0000 mgr.y (mgr.14120) 9 : cephadm [INF] Generating ssh key...
2026-03-09T18:19:33.338 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:33 vm00 bash[17468]: cluster 2026-03-09T18:19:32.158470+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-09T18:19:33.338 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:33 vm00 bash[17468]: audit 2026-03-09T18:19:32.179228+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:33.338 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:33 vm00 bash[17468]: audit 2026-03-09T18:19:32.181110+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:33.560 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: Scheduled mon update...
2026-03-09T18:19:33.603 INFO:teuthology.orchestra.run.vm00.stderr:Deploying unmanaged mgr service...
2026-03-09T18:19:33.860 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: Scheduled mgr update...
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:32.462246+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:32.747405+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: cephadm 2026-03-09T18:19:32.958394+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Deploying cephadm binary to vm00
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:33.241222+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:33.267104+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:33.557975+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:33.858430+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:34.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:34 vm00 bash[17468]: audit 2026-03-09T18:19:34.138728+0000 mon.a (mon.0) 61 : audit [INF] from='client.? 192.168.123.100:0/3077548982' entity='client.admin'
2026-03-09T18:19:34.469 INFO:teuthology.orchestra.run.vm00.stderr:Enabling the dashboard module...
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: cephadm 2026-03-09T18:19:33.241578+0000 mgr.y (mgr.14120) 13 : cephadm [INF] Added host vm00
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: audit 2026-03-09T18:19:33.553953+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: cephadm 2026-03-09T18:19:33.554870+0000 mgr.y (mgr.14120) 15 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: audit 2026-03-09T18:19:33.855014+0000 mgr.y (mgr.14120) 16 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: cephadm 2026-03-09T18:19:33.855798+0000 mgr.y (mgr.14120) 17 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: audit 2026-03-09T18:19:34.420444+0000 mon.a (mon.0) 62 : audit [INF] from='client.? 192.168.123.100:0/936272195' entity='client.admin'
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: audit 2026-03-09T18:19:34.799613+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: audit 2026-03-09T18:19:34.859834+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.100:0/1850030359' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T18:19:35.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:35 vm00 bash[17468]: audit 2026-03-09T18:19:34.962386+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y'
2026-03-09T18:19:36.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:35 vm00 bash[17744]: ignoring --setuser ceph since I am not root
2026-03-09T18:19:36.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:35 vm00 bash[17744]: ignoring --setgroup ceph since I am not root
2026-03-09T18:19:36.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:35 vm00 bash[17744]: debug 2026-03-09T18:19:35.945+0000 7f564a9d2000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:19:36.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:35 vm00 bash[17744]: debug 2026-03-09T18:19:35.993+0000 7f564a9d2000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:19:36.160 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: {
2026-03-09T18:19:36.160 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "epoch": 9,
2026-03-09T18:19:36.160 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "available": true,
2026-03-09T18:19:36.160 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "active_name": "y",
2026-03-09T18:19:36.160 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "num_standby": 0
2026-03-09T18:19:36.160
INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: } 2026-03-09T18:19:36.220 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for the mgr to restart... 2026-03-09T18:19:36.220 INFO:teuthology.orchestra.run.vm00.stderr:Waiting for mgr epoch 9... 2026-03-09T18:19:36.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:36 vm00 bash[17744]: debug 2026-03-09T18:19:36.369+0000 7f564a9d2000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:19:37.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:36 vm00 bash[17468]: audit 2026-03-09T18:19:35.805481+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/1850030359' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T18:19:37.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:36 vm00 bash[17468]: cluster 2026-03-09T18:19:35.805574+0000 mon.a (mon.0) 67 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-09T18:19:37.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:36 vm00 bash[17468]: audit 2026-03-09T18:19:36.158372+0000 mon.a (mon.0) 68 : audit [DBG] from='client.? 
192.168.123.100:0/497300659' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:19:37.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:36 vm00 bash[17744]: debug 2026-03-09T18:19:36.909+0000 7f564a9d2000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:19:37.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.009+0000 7f564a9d2000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:19:37.542 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.229+0000 7f564a9d2000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:19:37.542 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.341+0000 7f564a9d2000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:19:37.542 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.409+0000 7f564a9d2000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:19:37.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.537+0000 7f564a9d2000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:19:37.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.605+0000 7f564a9d2000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:19:37.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:37 vm00 bash[17744]: debug 2026-03-09T18:19:37.681+0000 7f564a9d2000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:19:38.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.245+0000 7f564a9d2000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:19:38.634 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.305+0000 7f564a9d2000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:19:38.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.365+0000 7f564a9d2000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:19:39.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.725+0000 7f564a9d2000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:19:39.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.797+0000 7f564a9d2000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:19:39.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.861+0000 7f564a9d2000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:19:39.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:38 vm00 bash[17744]: debug 2026-03-09T18:19:38.957+0000 7f564a9d2000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:19:39.560 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:39 vm00 bash[17744]: debug 2026-03-09T18:19:39.301+0000 7f564a9d2000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:19:39.560 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:39 vm00 bash[17744]: debug 2026-03-09T18:19:39.493+0000 7f564a9d2000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:19:39.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:39 vm00 bash[17744]: debug 2026-03-09T18:19:39.557+0000 7f564a9d2000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:19:39.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:39 vm00 bash[17744]: debug 2026-03-09T18:19:39.625+0000 7f564a9d2000 -1 mgr[py] Module balancer has missing 
NOTIFY_TYPES member 2026-03-09T18:19:39.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:39 vm00 bash[17744]: debug 2026-03-09T18:19:39.785+0000 7f564a9d2000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:19:40.559 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:40 vm00 bash[17468]: cluster 2026-03-09T18:19:40.307868+0000 mon.a (mon.0) 69 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:19:40.559 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:40 vm00 bash[17468]: cluster 2026-03-09T18:19:40.308720+0000 mon.a (mon.0) 70 : cluster [INF] Activating manager daemon y 2026-03-09T18:19:40.559 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:40 vm00 bash[17468]: cluster 2026-03-09T18:19:40.310796+0000 mon.a (mon.0) 71 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T18:19:40.559 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:40 vm00 bash[17744]: debug 2026-03-09T18:19:40.305+0000 7f564a9d2000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:19:40.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:40 vm00 bash[17744]: [09/Mar/2026:18:19:40] ENGINE Bus STARTING 2026-03-09T18:19:40.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:40 vm00 bash[17744]: [09/Mar/2026:18:19:40] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:19:40.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:40 vm00 bash[17744]: [09/Mar/2026:18:19:40] ENGINE Bus STARTED 2026-03-09T18:19:41.390 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: { 2026-03-09T18:19:41.390 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "mgrmap_epoch": 11, 2026-03-09T18:19:41.390 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: "initialized": true 2026-03-09T18:19:41.390 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: } 2026-03-09T18:19:41.445 INFO:teuthology.orchestra.run.vm00.stderr:mgr epoch 9 is available 2026-03-09T18:19:41.446 
INFO:teuthology.orchestra.run.vm00.stderr:Generating a dashboard self-signed certificate... 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: cluster 2026-03-09T18:19:40.362294+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0536891s) 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.372626+0000 mon.a (mon.0) 73 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.373929+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.374965+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.375249+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.375522+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: cluster 2026-03-09T18:19:40.381895+0000 mon.a (mon.0) 78 : cluster [INF] Manager daemon y is now available 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: 
audit 2026-03-09T18:19:40.406729+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.408080+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.428239+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.444792+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: cephadm 2026-03-09T18:19:40.577919+0000 mgr.y (mgr.14152) 1 : cephadm [INF] [09/Mar/2026:18:19:40] ENGINE Bus STARTING 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: cephadm 2026-03-09T18:19:40.694961+0000 mgr.y (mgr.14152) 2 : cephadm [INF] [09/Mar/2026:18:19:40] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: cephadm 2026-03-09T18:19:40.695140+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [09/Mar/2026:18:19:40] ENGINE Bus STARTED 2026-03-09T18:19:41.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:41 vm00 bash[17468]: audit 2026-03-09T18:19:40.699178+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:41.776 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: Self-signed certificate created 2026-03-09T18:19:41.820 INFO:teuthology.orchestra.run.vm00.stderr:Creating initial admin user... 2026-03-09T18:19:42.231 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: {"username": "admin", "password": "$2b$12$mSZVpPq1NMwJxH8kKhyMVutaqaB9Z1yqLu/wieolhaXKfMTB28gea", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773080382, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T18:19:42.271 INFO:teuthology.orchestra.run.vm00.stderr:Fetching dashboard port number... 2026-03-09T18:19:42.529 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: 8443 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: cluster 2026-03-09T18:19:41.368893+0000 mon.a (mon.0) 84 : cluster [DBG] mgrmap e11: y(active, since 1.06028s) 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: audit 2026-03-09T18:19:41.371569+0000 mgr.y (mgr.14152) 4 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: audit 2026-03-09T18:19:41.387751+0000 mgr.y (mgr.14152) 5 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: audit 2026-03-09T18:19:41.715474+0000 mgr.y (mgr.14152) 6 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: audit 2026-03-09T18:19:41.770819+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: audit 2026-03-09T18:19:41.774335+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:42.540 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:42 vm00 bash[17468]: audit 2026-03-09T18:19:42.228473+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:42.573 INFO:teuthology.orchestra.run.vm00.stderr:firewalld does not appear to be present 2026-03-09T18:19:42.573 INFO:teuthology.orchestra.run.vm00.stderr:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr:Ceph Dashboard is now available at: 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr: URL: https://vm00.local:8443/ 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr: User: admin 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr: Password: g1avjhfhy4 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:42.574 INFO:teuthology.orchestra.run.vm00.stderr:Enabling autotune for osd_memory_target 2026-03-09T18:19:43.148 INFO:teuthology.orchestra.run.vm00.stderr:/usr/bin/ceph: set mgr/dashboard/cluster/status 2026-03-09T18:19:43.190 INFO:teuthology.orchestra.run.vm00.stderr:You can access the Ceph CLI with: 2026-03-09T18:19:43.190 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:43.190 INFO:teuthology.orchestra.run.vm00.stderr: sudo /home/ubuntu/cephtest/cephadm shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr:Please consider 
enabling telemetry to help improve Ceph: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: ceph telemetry on 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr:For more information see: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: https://docs.ceph.com/docs/master/mgr/telemetry/ 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:19:43.191 INFO:teuthology.orchestra.run.vm00.stderr:Bootstrap complete. 2026-03-09T18:19:43.206 INFO:tasks.cephadm:Fetching config... 2026-03-09T18:19:43.206 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:19:43.206 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T18:19:43.209 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T18:19:43.210 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:19:43.210 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T18:19:43.254 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T18:19:43.254 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:19:43.254 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.a/keyring of=/dev/stdout 2026-03-09T18:19:43.303 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T18:19:43.304 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:19:43.304 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T18:19:43.350 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T18:19:43.350 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXIBHcD0alp5127bFN4bO1tRnJdPN2RTTLx3j/D/CFpx5KHXt7gTW/YMMaS+0l+KbOl+uZ+ZHCvr1DnqwCTtOQvpClt7AQ3pt2bzYKuesEiivugjS80vDnsau5n7dORfPRPYuimKGFqlohKLTQTe0Z6NmR/LFwS3hgIsMVFMKR9kgTxONK0i4wHSsb+jqZhSKMP6em7xnQfskC9vNdncm6Yd04uGpm4Dxj+OWIlBFk0CyLeSxAtuaSP8E9rltvG5vMbnu2BHbCPBAP62gn/Bk2uQa2XugcKAcf0Rbr2DBhj9ndAeOvLTW+YHHM6ygckGu9VTKEmDbuR27jeF06BAhHDazfwTuK1hdfp704P9RHmn/wR5iXxKZ4saysWlAopDlUeEwrs2h0dRrU8/+SOrSAYN4CL9+HYAJgtbDg6K4emHtD3oFI6a9zthYyQ/mjwrAinwEqL2jNzRcAJXdgQ1h3SYK/YDRsEo7znh2k6Yy8yaBxrSF41vBWQBT9bFCLSo8= ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T18:19:43.409 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:43 vm00 bash[17468]: audit 2026-03-09T18:19:42.067726+0000 mgr.y (mgr.14152) 7 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:43.409 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:43 vm00 bash[17468]: audit 2026-03-09T18:19:42.527252+0000 mon.a (mon.0) 88 : audit [DBG] from='client.? 192.168.123.100:0/907926925' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T18:19:43.409 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:43 vm00 bash[17468]: audit 2026-03-09T18:19:43.142755+0000 mon.a (mon.0) 89 : audit [INF] from='client.? 
192.168.123.100:0/3348319845' entity='client.admin' 2026-03-09T18:19:43.409 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:43 vm00 bash[17468]: cluster 2026-03-09T18:19:43.231560+0000 mon.a (mon.0) 90 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T18:19:43.416 INFO:teuthology.orchestra.run.vm00.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXIBHcD0alp5127bFN4bO1tRnJdPN2RTTLx3j/D/CFpx5KHXt7gTW/YMMaS+0l+KbOl+uZ+ZHCvr1DnqwCTtOQvpClt7AQ3pt2bzYKuesEiivugjS80vDnsau5n7dORfPRPYuimKGFqlohKLTQTe0Z6NmR/LFwS3hgIsMVFMKR9kgTxONK0i4wHSsb+jqZhSKMP6em7xnQfskC9vNdncm6Yd04uGpm4Dxj+OWIlBFk0CyLeSxAtuaSP8E9rltvG5vMbnu2BHbCPBAP62gn/Bk2uQa2XugcKAcf0Rbr2DBhj9ndAeOvLTW+YHHM6ygckGu9VTKEmDbuR27jeF06BAhHDazfwTuK1hdfp704P9RHmn/wR5iXxKZ4saysWlAopDlUeEwrs2h0dRrU8/+SOrSAYN4CL9+HYAJgtbDg6K4emHtD3oFI6a9zthYyQ/mjwrAinwEqL2jNzRcAJXdgQ1h3SYK/YDRsEo7znh2k6Yy8yaBxrSF41vBWQBT9bFCLSo8= ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:19:43.423 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXIBHcD0alp5127bFN4bO1tRnJdPN2RTTLx3j/D/CFpx5KHXt7gTW/YMMaS+0l+KbOl+uZ+ZHCvr1DnqwCTtOQvpClt7AQ3pt2bzYKuesEiivugjS80vDnsau5n7dORfPRPYuimKGFqlohKLTQTe0Z6NmR/LFwS3hgIsMVFMKR9kgTxONK0i4wHSsb+jqZhSKMP6em7xnQfskC9vNdncm6Yd04uGpm4Dxj+OWIlBFk0CyLeSxAtuaSP8E9rltvG5vMbnu2BHbCPBAP62gn/Bk2uQa2XugcKAcf0Rbr2DBhj9ndAeOvLTW+YHHM6ygckGu9VTKEmDbuR27jeF06BAhHDazfwTuK1hdfp704P9RHmn/wR5iXxKZ4saysWlAopDlUeEwrs2h0dRrU8/+SOrSAYN4CL9+HYAJgtbDg6K4emHtD3oFI6a9zthYyQ/mjwrAinwEqL2jNzRcAJXdgQ1h3SYK/YDRsEo7znh2k6Yy8yaBxrSF41vBWQBT9bFCLSo8= ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T18:19:43.434 INFO:teuthology.orchestra.run.vm08.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXIBHcD0alp5127bFN4bO1tRnJdPN2RTTLx3j/D/CFpx5KHXt7gTW/YMMaS+0l+KbOl+uZ+ZHCvr1DnqwCTtOQvpClt7AQ3pt2bzYKuesEiivugjS80vDnsau5n7dORfPRPYuimKGFqlohKLTQTe0Z6NmR/LFwS3hgIsMVFMKR9kgTxONK0i4wHSsb+jqZhSKMP6em7xnQfskC9vNdncm6Yd04uGpm4Dxj+OWIlBFk0CyLeSxAtuaSP8E9rltvG5vMbnu2BHbCPBAP62gn/Bk2uQa2XugcKAcf0Rbr2DBhj9ndAeOvLTW+YHHM6ygckGu9VTKEmDbuR27jeF06BAhHDazfwTuK1hdfp704P9RHmn/wR5iXxKZ4saysWlAopDlUeEwrs2h0dRrU8/+SOrSAYN4CL9+HYAJgtbDg6K4emHtD3oFI6a9zthYyQ/mjwrAinwEqL2jNzRcAJXdgQ1h3SYK/YDRsEo7znh2k6Yy8yaBxrSF41vBWQBT9bFCLSo8= ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:19:43.439 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T18:19:44.115 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T18:19:44.115 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T18:19:44.615 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-09T18:19:44.615 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T18:19:44.615 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-09T18:19:44.618 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T18:19:44.618 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:19:44.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:44 vm00 bash[17468]: audit 2026-03-09T18:19:43.532897+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:44.634 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:44 vm00 bash[17468]: audit 2026-03-09T18:19:43.882550+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:44.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:44 vm00 bash[17468]: audit 2026-03-09T18:19:44.051422+0000 mon.a (mon.0) 93 : audit [INF] from='client.? 192.168.123.100:0/3025050124' entity='client.admin' 2026-03-09T18:19:44.663 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 2026-03-09T18:19:44.663 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch host add vm08 2026-03-09T18:19:45.796 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-09T18:19:45.812 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:45 vm00 bash[17468]: audit 2026-03-09T18:19:44.557135+0000 mgr.y (mgr.14152) 8 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:45.812 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:45 vm00 bash[17468]: audit 2026-03-09T18:19:44.560056+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:45.848 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch host ls --format=json 2026-03-09T18:19:46.389 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:19:46.389 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.108", 
"hostname": "vm08", "labels": [], "status": ""}] 2026-03-09T18:19:46.442 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T18:19:46.443 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd crush tunables default 2026-03-09T18:19:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:46 vm00 bash[17468]: audit 2026-03-09T18:19:45.083258+0000 mgr.y (mgr.14152) 9 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:46 vm00 bash[17468]: cephadm 2026-03-09T18:19:45.451119+0000 mgr.y (mgr.14152) 10 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T18:19:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:46 vm00 bash[17468]: audit 2026-03-09T18:19:45.792625+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:46 vm00 bash[17468]: cephadm 2026-03-09T18:19:45.793106+0000 mgr.y (mgr.14152) 11 : cephadm [INF] Added host vm08 2026-03-09T18:19:48.000 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-09T18:19:48.052 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:47 vm00 bash[17468]: audit 2026-03-09T18:19:46.387229+0000 mgr.y (mgr.14152) 12 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:19:48.052 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:47 vm00 bash[17468]: cluster 2026-03-09T18:19:46.995913+0000 mon.a (mon.0) 96 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T18:19:48.052 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:47 vm00 bash[17468]: audit 2026-03-09T18:19:47.045305+0000 mon.a (mon.0) 97 : audit [INF] from='client.? 192.168.123.100:0/764525194' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T18:19:48.052 INFO:tasks.cephadm:Adding mon.a on vm00 2026-03-09T18:19:48.052 INFO:tasks.cephadm:Adding mon.c on vm00 2026-03-09T18:19:48.052 INFO:tasks.cephadm:Adding mon.b on vm08 2026-03-09T18:19:48.052 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch apply mon '3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b' 2026-03-09T18:19:48.584 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-09T18:19:48.643 DEBUG:teuthology.orchestra.run.vm00:mon.c> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.c.service 2026-03-09T18:19:48.644 DEBUG:teuthology.orchestra.run.vm08:mon.b> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.b.service 2026-03-09T18:19:48.645 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T18:19:48.645 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph mon dump -f json 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:47.995613+0000 mon.a (mon.0) 98 : audit [INF] from='client.? 
192.168.123.100:0/764525194' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: cluster 2026-03-09T18:19:47.995706+0000 mon.a (mon.0) 99 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.276959+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.278195+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.282883+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.283574+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.284434+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.285036+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:19:49.384 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: cephadm 2026-03-09T18:19:48.286033+0000 mgr.y (mgr.14152) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: cephadm 2026-03-09T18:19:48.355329+0000 mgr.y (mgr.14152) 14 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.419122+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.468278+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.571540+0000 mgr.y (mgr.14152) 15 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: cephadm 2026-03-09T18:19:48.572890+0000 mgr.y (mgr.14152) 16 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b;count:3 2026-03-09T18:19:49.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:49 vm00 bash[17468]: audit 2026-03-09T18:19:48.576294+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:49.788 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:19:49.788 
INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","modified":"2026-03-09T18:19:13.682656Z","created":"2026-03-09T18:19:13.682656Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T18:19:49.791 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-09T18:19:50.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:50 vm00 bash[17468]: audit 2026-03-09T18:19:49.382044+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:50.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:50 vm00 bash[17468]: audit 2026-03-09T18:19:49.675249+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:50.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:50 vm00 bash[17468]: audit 2026-03-09T18:19:49.786010+0000 mon.a (mon.0) 111 : audit [DBG] from='client.? 192.168.123.108:0/1988171419' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:19:50.848 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T18:19:50.848 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph mon dump -f json 2026-03-09T18:19:51.341 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:19:51.341 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","modified":"2026-03-09T18:19:13.682656Z","created":"2026-03-09T18:19:13.682656Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T18:19:51.348 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-09T18:19:52.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:52 vm00 bash[17468]: audit 2026-03-09T18:19:51.339407+0000 mon.a (mon.0) 112 : audit [DBG] from='client.? 192.168.123.108:0/3040639587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:19:52.408 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T18:19:52.408 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph mon dump -f json 2026-03-09T18:19:52.887 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:19:52.887 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","modified":"2026-03-09T18:19:13.682656Z","created":"2026-03-09T18:19:13.682656Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T18:19:52.890 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-09T18:19:52.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:52 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:19:52.975 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:52 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:19:53.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:52 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:19:53.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:19:52 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:19:53.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:53 vm00 bash[22468]: debug 2026-03-09T18:19:53.213+0000 7fa1c5347700 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T18:19:54.095 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T18:19:54.095 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph mon dump -f json 2026-03-09T18:19:54.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 systemd[1]: Started Ceph mon.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:19:54.532 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.505+0000 7f92b9792880 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T18:19:54.532 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.505+0000 7f92b9792880 0 ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable), process ceph-mon, pid 7 2026-03-09T18:19:54.532 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.505+0000 7f92b9792880 0 pidfile_write: ignore empty --pid-file 2026-03-09T18:19:54.532 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 0 load: jerasure load: lrc 2026-03-09T18:19:54.532 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: RocksDB version: 6.15.5 2026-03-09T18:19:54.532 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@ 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Compile date Apr 18 2022 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: DB SUMMARY 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: DB Session ID: 69OHYY2VT1HKSUFJ6NC0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: CURRENT file: CURRENT 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 
2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: MANIFEST file: MANIFEST-000003 size: 57 Bytes 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.error_if_exists: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.create_if_missing: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.env: 0x55b0d60c6860 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.fs: Posix File System 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 
7f92b9792880 4 rocksdb: Options.info_log: 0x55b0f631fe00 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.statistics: (nil) 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.use_fsync: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.db_log_dir: 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-b/store.db 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 
rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.write_buffer_manager: 0x55b0f6410270 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 
2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T18:19:54.533 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.unordered_write: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.row_cache: None 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.wal_filter: None 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: 
debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.preserve_deletes: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.two_write_queues: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.atomic_flush: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.file_checksum_gen_factory: Unknown 
2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T18:19:54.534 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_open_files: -1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 
2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Compression algorithms supported: 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kZSTD supported: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kXpressCompression supported: 0 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T18:19:54.534 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kZlibCompression supported: 1 2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T18:19:54.535 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000003
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]:
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.merge_operator:
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_filter: None
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_filter_factory: None
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0f62edd00)
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cache_index_and_filter_blocks: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: pin_top_level_index_and_filter: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: index_type: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: data_block_index_type: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: index_shortening: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: data_block_hash_table_util_ratio: 0.750000
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: hash_index_allow_collision: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: checksum: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: no_block_cache: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_cache: 0x55b0f6354170
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_cache_name: BinnedLRUCache
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_cache_options:
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: capacity : 536870912
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: num_shard_bits : 4
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: strict_capacity_limit : 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: high_pri_pool_ratio: 0.000
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_cache_compressed: (nil)
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: persistent_cache: (nil)
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_size: 4096
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_size_deviation: 10
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_restart_interval: 16
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: index_block_restart_interval: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: metadata_block_size: 4096
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: partition_filters: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: use_delta_encoding: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: filter_policy: rocksdb.BuiltinBloomFilter
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: whole_key_filtering: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: verify_compression: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: read_amp_bytes_per_bit: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: format_version: 4
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: enable_index_compression: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: block_align: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression: NoCompression
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.num_levels: 7
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-09T18:19:54.535 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.level: 32767
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compression_opts.enabled: false
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.arena_block_size: 4194304
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.table_properties_collectors:
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.inplace_update_support: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.bloom_locality: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.max_successive_merges: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.force_consistency_checks: 1
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.ttl: 2592000
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.enable_blob_files: false
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.min_blob_size: 0
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.blob_file_size: 268435456
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.509+0000 7f92b9792880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-09T18:19:54.536 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.513+0000 7f92b9792880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000003 succeeded,manifest_file_number is 3, next_file_number is 5, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.513+0000 7f92b9792880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 0
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.513+0000 7f92b9792880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 7
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.517+0000 7f92b9792880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773080394522993, "job": 1, "event": "recovery_started", "wal_files": [4]}
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.517+0000 7f92b9792880 4 rocksdb: [db/db_impl/db_impl_open.cc:847] Recovering log #4 mode 2
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.517+0000 7f92b9792880 3 rocksdb: [table/block_based/filter_policy.cc:996] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5.
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.517+0000 7f92b9792880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773080394523824, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1540, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1773080394, "oldest_key_time": 0, "file_creation_time": 0, "db_id": "711aa06e-aa52-4f38-9afc-7bd63241c2e3", "db_session_id": "69OHYY2VT1HKSUFJ6NC0"}}
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.517+0000 7f92b9792880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 9
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773080394525984, "job": 1, "event": "recovery_finished"}
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 4 rocksdb: [file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x55b0f633a700
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 4 rocksdb: DB pointer 0x55b0f63ae000
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 0 mon.b does not exist in monmap, will attempt to join an existing cluster
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 0 using public_addr v2:192.168.123.108:0/0 -> [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0]
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.521+0000 7f92b9792880 0 starting mon.b rank -1 at public addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] at bind addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:54.537 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.525+0000 7f92b9792880 1 mon.b@-1(???) e0 preinit fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.529+0000 7f92a951a700 4 rocksdb: [db/db_impl/db_impl.cc:902] ------- DUMPING STATS -------
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.529+0000 7f92a951a700 4 rocksdb: [db/db_impl/db_impl.cc:903]
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** DB Stats **
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** Compaction Stats [default] **
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: L0 1/0 1.50 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Sum 1/0 1.50 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** Compaction Stats [default] **
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(Total Files): cumulative 0, interval 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(Keys): cumulative 0, interval 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** File Read Latency Histogram By Level [default] **
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** Compaction Stats [default] **
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: L0 1/0 1.50 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Sum 1/0 1.50 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** Compaction Stats [default] **
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.7 0.00 0.00 1 0.001 0 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(Total Files): cumulative 0, interval 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: AddFile(Keys): cumulative 0, interval 0
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Cumulative compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: ** File Read Latency Histogram By Level [default] **
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 0 mon.b@-1(synchronizing).mds e1 new map
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 0 mon.b@-1(synchronizing).mds e1 print_map
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: e1
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: enable_multiple, ever_enabled_multiple: 1,1
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: legacy client fscid: -1
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]:
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: No filesystems configured
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-09T18:19:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 
0 up, 0 in 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.549+0000 7f92ac520700 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:14.917792+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:14.910235+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:15.984836+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:15.984881+0000 mon.a (mon.0) 2 : cluster [DBG] monmap e1: 1 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0]} 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 
2026-03-09T18:19:15.984931+0000 mon.a (mon.0) 3 : cluster [DBG] fsmap 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:15.984947+0000 mon.a (mon.0) 4 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:15.985552+0000 mon.a (mon.0) 5 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:16.071562+0000 mon.a (mon.0) 6 : audit [INF] from='client.? 192.168.123.100:0/2933873712' entity='client.admin' 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:16.760184+0000 mon.a (mon.0) 7 : audit [DBG] from='client.? 192.168.123.100:0/2082126079' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:19.067127+0000 mon.a (mon.0) 8 : audit [DBG] from='client.? 192.168.123.100:0/4293404365' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.397121+0000 mon.a (mon.0) 9 : audit [DBG] from='client.? 
192.168.123.100:0/3199459519' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:21.474696+0000 mon.a (mon.0) 10 : cluster [INF] Activating manager daemon y 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:21.479074+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00450222s) 2026-03-09T18:19:54.977 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.479721+0000 mon.a (mon.0) 12 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.479772+0000 mon.a (mon.0) 13 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.479824+0000 mon.a (mon.0) 14 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.480872+0000 mon.a (mon.0) 15 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.481006+0000 mon.a (mon.0) 16 : audit [DBG] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 
bash[17774]: cluster 2026-03-09T18:19:21.486129+0000 mon.a (mon.0) 17 : cluster [INF] Manager daemon y is now available 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.495460+0000 mon.a (mon.0) 18 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.496436+0000 mon.a (mon.0) 19 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.501843+0000 mon.a (mon.0) 20 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.505458+0000 mon.a (mon.0) 21 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:21.508800+0000 mon.a (mon.0) 22 : audit [INF] from='mgr.14100 192.168.123.100:0/783518435' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:22.481223+0000 mon.a (mon.0) 23 : cluster [DBG] mgrmap e3: y(active, since 1.00666s) 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:23.483997+0000 mon.a (mon.0) 24 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:23.760492+0000 
mon.a (mon.0) 25 : audit [DBG] from='client.? 192.168.123.100:0/80165702' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:24.064975+0000 mon.a (mon.0) 26 : audit [INF] from='client.? 192.168.123.100:0/2769820379' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:24.131112+0000 mon.a (mon.0) 27 : audit [INF] from='client.? 192.168.123.100:0/2769820379' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:24.471352+0000 mon.a (mon.0) 28 : audit [INF] from='client.? 192.168.123.100:0/3124075155' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:25.133568+0000 mon.a (mon.0) 29 : audit [INF] from='client.? 192.168.123.100:0/3124075155' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:25.134199+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:25.482128+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 
192.168.123.100:0/487389164' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:29.532244+0000 mon.a (mon.0) 32 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:29.533304+0000 mon.a (mon.0) 33 : cluster [INF] Activating manager daemon y 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:29.535222+0000 mon.a (mon.0) 34 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:29.587851+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0547116s) 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.628161+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.629149+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.630158+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.630302+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": 
"osd metadata"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.630386+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:29.635683+0000 mon.a (mon.0) 41 : cluster [INF] Manager daemon y is now available 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.647969+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.650713+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.658723+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.659378+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.661410+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.662698+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14120 
192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.669940+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:29.853613+0000 mgr.y (mgr.14120) 1 : cephadm [INF] [09/Mar/2026:18:19:29] ENGINE Bus STARTING 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:29.964546+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [09/Mar/2026:18:19:29] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:29.964677+0000 mgr.y (mgr.14120) 3 : cephadm [INF] [09/Mar/2026:18:19:29] ENGINE Bus STARTED 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.968421+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:29.977186+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:30.594865+0000 mon.a (mon.0) 51 : cluster [DBG] mgrmap e7: y(active, since 1.06173s) 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 
2026-03-09T18:19:30.604531+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:30.612180+0000 mgr.y (mgr.14120) 5 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:31.134322+0000 mgr.y (mgr.14120) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:31.156515+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:31.195150+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:31.803955+0000 mgr.y (mgr.14120) 7 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:32.103969+0000 mgr.y (mgr.14120) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:32.104209+0000 mgr.y (mgr.14120) 9 : 
cephadm [INF] Generating ssh key... 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:32.158470+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:32.179228+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:32.181110+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:32.462246+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:32.747405+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:32.958394+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:33.241222+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:33.267104+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:33.557975+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:33.858430+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:34.138728+0000 mon.a (mon.0) 61 : audit [INF] from='client.? 192.168.123.100:0/3077548982' entity='client.admin' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:33.241578+0000 mgr.y (mgr.14120) 13 : cephadm [INF] Added host vm00 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:33.553953+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:33.554870+0000 mgr.y (mgr.14120) 15 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:33.855014+0000 mgr.y (mgr.14120) 16 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:33.855798+0000 mgr.y (mgr.14120) 17 : cephadm [INF] Saving service mgr spec with placement 
count:2 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:34.420444+0000 mon.a (mon.0) 62 : audit [INF] from='client.? 192.168.123.100:0/936272195' entity='client.admin' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:34.799613+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:34.859834+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.100:0/1850030359' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:34.962386+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.14120 192.168.123.100:0/1388571509' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:35.805481+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/1850030359' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:35.805574+0000 mon.a (mon.0) 67 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:36.158372+0000 mon.a (mon.0) 68 : audit [DBG] from='client.? 
192.168.123.100:0/497300659' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:40.307868+0000 mon.a (mon.0) 69 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:40.308720+0000 mon.a (mon.0) 70 : cluster [INF] Activating manager daemon y 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:40.310796+0000 mon.a (mon.0) 71 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:40.362294+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0536891s) 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.372626+0000 mon.a (mon.0) 73 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.373929+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.374965+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.375249+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": 
"osd metadata"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.375522+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:40.381895+0000 mon.a (mon.0) 78 : cluster [INF] Manager daemon y is now available 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.406729+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.408080+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.428239+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.444792+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:40.577919+0000 mgr.y (mgr.14152) 1 : cephadm [INF] [09/Mar/2026:18:19:40] ENGINE Bus STARTING 2026-03-09T18:19:54.979 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:40.694961+0000 mgr.y (mgr.14152) 2 : cephadm [INF] [09/Mar/2026:18:19:40] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:40.695140+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [09/Mar/2026:18:19:40] ENGINE Bus STARTED 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:40.699178+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:41.368893+0000 mon.a (mon.0) 84 : cluster [DBG] mgrmap e11: y(active, since 1.06028s) 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:41.371569+0000 mgr.y (mgr.14152) 4 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:41.387751+0000 mgr.y (mgr.14152) 5 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:41.715474+0000 mgr.y (mgr.14152) 6 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:41.770819+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:41.774335+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:42.228473+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:42.067726+0000 mgr.y (mgr.14152) 7 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:42.527252+0000 mon.a (mon.0) 88 : audit [DBG] from='client.? 192.168.123.100:0/907926925' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:43.142755+0000 mon.a (mon.0) 89 : audit [INF] from='client.? 
192.168.123.100:0/3348319845' entity='client.admin' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:43.231560+0000 mon.a (mon.0) 90 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:43.532897+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:43.882550+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:44.051422+0000 mon.a (mon.0) 93 : audit [INF] from='client.? 192.168.123.100:0/3025050124' entity='client.admin' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:44.557135+0000 mgr.y (mgr.14152) 8 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:44.560056+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:45.083258+0000 mgr.y (mgr.14152) 9 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:45.451119+0000 mgr.y (mgr.14152) 10 : cephadm [INF] 
Deploying cephadm binary to vm08 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:45.792625+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:45.793106+0000 mgr.y (mgr.14152) 11 : cephadm [INF] Added host vm08 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:46.387229+0000 mgr.y (mgr.14152) 12 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:46.995913+0000 mon.a (mon.0) 96 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:47.045305+0000 mon.a (mon.0) 97 : audit [INF] from='client.? 192.168.123.100:0/764525194' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:47.995613+0000 mon.a (mon.0) 98 : audit [INF] from='client.? 
192.168.123.100:0/764525194' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cluster 2026-03-09T18:19:47.995706+0000 mon.a (mon.0) 99 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:19:54.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.276959+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.278195+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.282883+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.283574+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.284434+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.285036+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:19:54.980 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:48.286033+0000 mgr.y (mgr.14152) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:48.355329+0000 mgr.y (mgr.14152) 14 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.419122+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.468278+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.571540+0000 mgr.y (mgr.14152) 15 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: cephadm 2026-03-09T18:19:48.572890+0000 mgr.y (mgr.14152) 16 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b;count:3 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:48.576294+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:49.382044+0000 mon.a (mon.0) 109 : 
audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:49.675249+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:49.786010+0000 mon.a (mon.0) 111 : audit [DBG] from='client.? 192.168.123.108:0/1988171419' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: audit 2026-03-09T18:19:51.339407+0000 mon.a (mon.0) 112 : audit [DBG] from='client.? 192.168.123.108:0/3040639587' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.553+0000 7f92ac520700 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.553+0000 7f92ac520700 20 expand_channel_meta expand map: {default=false} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.553+0000 7f92ac520700 20 expand_channel_meta from 'false' to 'false' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expanded map: {default=false} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expand map: {default=info} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 
2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta from 'info' to 'info' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expanded map: {default=info} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expand map: {default=daemon} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta from 'daemon' to 'daemon' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expanded map: {default=daemon} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expand map: {default=debug} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta from 'debug' to 'debug' 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 20 expand_channel_meta expanded map: {default=debug} 2026-03-09T18:19:54.980 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:19:54 vm08 bash[17774]: debug 2026-03-09T18:19:54.557+0000 7f92ac520700 10 mon.b@-1(synchronizing) e2 handle_conf_change mon_allow_pool_delete,mon_cluster_log_to_file 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cephadm 2026-03-09T18:19:53.076491+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Deploying daemon mon.b on vm08 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 
bash[17468]: audit 2026-03-09T18:19:53.225329+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:53.225861+0000 mon.a (mon.0) 125 : cluster [INF] mon.a calling monitor election 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:53.228431+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:54.219724+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:54.562639+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:55.219469+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:55.221677+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:55.562713+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": 
"b"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:56.219723+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:56.562845+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:57.219774+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:57.562676+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:58.220116+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:58.230967+0000 mon.a (mon.0) 136 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T18:19:58.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:58.235128+0000 mon.a (mon.0) 137 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:19:58.634 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:58.235224+0000 mon.a (mon.0) 138 : cluster [DBG] fsmap 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:58.235301+0000 mon.a (mon.0) 139 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:58.235535+0000 mon.a (mon.0) 140 : cluster [DBG] mgrmap e13: y(active, since 17s) 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: cluster 2026-03-09T18:19:58.240191+0000 mon.a (mon.0) 141 : cluster [INF] overall HEALTH_OK 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:58.243482+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:58.244476+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:58.245915+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:19:58 vm00 bash[17468]: audit 2026-03-09T18:19:58.246634+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cephadm 2026-03-09T18:19:53.076491+0000 mgr.y (mgr.14152) 
18 : cephadm [INF] Deploying daemon mon.b on vm08 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:53.225329+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:53.225861+0000 mon.a (mon.0) 125 : cluster [INF] mon.a calling monitor election 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:53.228431+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:54.219724+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:54.562639+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:55.219469+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:55.221677+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:55.562713+0000 
mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:56.219723+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:56.562845+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:57.219774+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:57.562676+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:58.220116+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:58.230967+0000 mon.a (mon.0) 136 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:58.235128+0000 mon.a (mon.0) 137 : cluster [DBG] monmap e2: 2 mons at 
{a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:58.235224+0000 mon.a (mon.0) 138 : cluster [DBG] fsmap 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:58.235301+0000 mon.a (mon.0) 139 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:58.235535+0000 mon.a (mon.0) 140 : cluster [DBG] mgrmap e13: y(active, since 17s) 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: cluster 2026-03-09T18:19:58.240191+0000 mon.a (mon.0) 141 : cluster [INF] overall HEALTH_OK 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:58.243482+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:58.244476+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:58.245915+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:19:58.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:19:58 vm00 bash[22468]: audit 2026-03-09T18:19:58.246634+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:19:58.614717+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:19:58.615320+0000 mon.a (mon.0) 148 : cluster [INF] mon.a calling monitor election 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:19:58.628545+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:19:58.628565+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:19:58.628764+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:19:59.563129+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:00.376056+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:00.563262+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:00.568180+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:01.563052+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:02.376343+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:02.563307+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:03.563320+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:03.631516+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:03.635248+0000 mon.a (mon.0) 157 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 
vm00 bash[22468]: cluster 2026-03-09T18:20:03.635346+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:03.635422+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:03.635641+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: cluster 2026-03-09T18:20:03.641088+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:03.644923+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:03.648797+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:03 vm00 bash[22468]: audit 2026-03-09T18:20:03.656632+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:19:58.614717+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:19:58.615320+0000 mon.a (mon.0) 148 : cluster [INF] mon.a calling monitor election 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:19:58.628545+0000 
mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:19:58.628565+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:19:58.628764+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:19:59.563129+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:00.376056+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:00.563262+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:00.568180+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:01.563052+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: 
cluster 2026-03-09T18:20:02.376343+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:02.563307+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:03.563320+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:03.631516+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:03.635248+0000 mon.a (mon.0) 157 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:03.635346+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:03.635422+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 2026-03-09T18:20:03.635641+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: cluster 
2026-03-09T18:20:03.641088+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK 2026-03-09T18:20:03.927 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:03.644923+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.928 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:03.648797+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.928 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:03 vm00 bash[17468]: audit 2026-03-09T18:20:03.656632+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.967 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:20:03.967 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":3,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","modified":"2026-03-09T18:19:58.564594Z","created":"2026-03-09T18:19:13.682656Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders
":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3301","nonce":0},{"type":"v1","addr":"192.168.123.100:6790","nonce":0}]},"addr":"192.168.123.100:6790/0","public_addr":"192.168.123.100:6790/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]} 2026-03-09T18:20:03.970 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 3 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cephadm 2026-03-09T18:19:53.076491+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Deploying daemon mon.b on vm08 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:53.225329+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:53.225861+0000 mon.a (mon.0) 125 : cluster [INF] mon.a calling monitor election 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:53.228431+0000 mon.a (mon.0) 126 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:54.219724+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:54.562639+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:55.219469+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:55.221677+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:55.562713+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:56.219723+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:56.562845+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: 
dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:57.219774+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:57.562676+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.220116+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.230967+0000 mon.a (mon.0) 136 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.235128+0000 mon.a (mon.0) 137 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.235224+0000 mon.a (mon.0) 138 : cluster [DBG] fsmap 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.235301+0000 mon.a (mon.0) 139 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.235535+0000 mon.a (mon.0) 140 : cluster [DBG] mgrmap e13: y(active, since 17s) 2026-03-09T18:20:03.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.240191+0000 mon.a (mon.0) 141 : cluster [INF] overall HEALTH_OK 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.243482+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.244476+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.245915+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.246634+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.614717+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.615320+0000 mon.a (mon.0) 148 : cluster [INF] mon.a calling monitor election 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:19:58.628545+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:20:03.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.628565+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:58.628764+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:19:59.563129+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:00.376056+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:00.563262+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:00.568180+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:01.563052+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:02.376343+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 
0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:02.563307+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:03.563320+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:03.631516+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:03.635248+0000 mon.a (mon.0) 157 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:03.635346+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:03.635422+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:03.635641+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: cluster 2026-03-09T18:20:03.641088+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK 2026-03-09T18:20:03.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:03.644923+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:03.648797+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:03 vm08 bash[17774]: audit 2026-03-09T18:20:03.656632+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.037 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T18:20:04.037 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph config generate-minimal-conf 2026-03-09T18:20:04.614 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:20:04.614 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-09T18:20:04.614 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:20:04.614 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] 2026-03-09T18:20:04.716 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-09T18:20:04.717 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:20:04.717 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T18:20:04.726 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:20:04.726 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:20:04.777 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T18:20:04.777 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T18:20:04.784 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T18:20:04.784 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:20:04.835 INFO:tasks.cephadm:Adding mgr.y on vm00 2026-03-09T18:20:04.835 INFO:tasks.cephadm:Adding mgr.x on vm08 2026-03-09T18:20:04.835 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch apply mgr '2;vm00=y;vm08=x' 2026-03-09T18:20:04.923 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: cephadm 2026-03-09T18:20:03.649574+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:20:04.923 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: cephadm 2026-03-09T18:20:03.657181+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:20:04.923 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: cephadm 2026-03-09T18:20:03.733877+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:20:04.923 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.742713+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.923 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.796845+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.923 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.804224+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: cephadm 2026-03-09T18:20:03.805024+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.805560+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.806275+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.806928+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: cephadm 2026-03-09T18:20:03.807622+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:03.965264+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 
192.168.123.108:0/3521809177' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.051882+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.053114+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.053988+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.054760+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.388373+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.389731+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.390628+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": 
"mon", "key": "public_network"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.391712+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.563348+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.612657+0000 mon.a (mon.0) 181 : audit [DBG] from='client.? 192.168.123.100:0/2967410513' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.625050+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.626075+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.628725+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.629796+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.718525+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:04 vm00 bash[22468]: audit 2026-03-09T18:20:04.736032+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: cephadm 2026-03-09T18:20:03.649574+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: cephadm 2026-03-09T18:20:03.657181+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: cephadm 2026-03-09T18:20:03.733877+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.742713+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.796845+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.804224+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: cephadm 2026-03-09T18:20:03.805024+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Reconfiguring mon.c (monmap 
changed)... 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.805560+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.806275+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.806928+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: cephadm 2026-03-09T18:20:03.807622+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Reconfiguring daemon mon.c on vm00 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:03.965264+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 
192.168.123.108:0/3521809177' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.051882+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.053114+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.053988+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.054760+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.388373+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.389731+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.390628+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": 
"mon", "key": "public_network"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.391712+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.563348+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.612657+0000 mon.a (mon.0) 181 : audit [DBG] from='client.? 192.168.123.100:0/2967410513' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.625050+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.626075+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.628725+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.629796+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch
2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.718525+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.924 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:04 vm00 bash[17468]: audit 2026-03-09T18:20:04.736032+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: cephadm 2026-03-09T18:20:03.649574+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: cephadm 2026-03-09T18:20:03.657181+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: cephadm 2026-03-09T18:20:03.733877+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.742713+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.796845+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.804224+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: cephadm 2026-03-09T18:20:03.805024+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.805560+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.806275+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.806928+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: cephadm 2026-03-09T18:20:03.807622+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:03.965264+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.108:0/3521809177' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.051882+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.053114+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.053988+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.054760+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.388373+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.389731+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.390628+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.391712+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.563348+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.612657+0000 mon.a (mon.0) 181 : audit [DBG] from='client.? 192.168.123.100:0/2967410513' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.625050+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.626075+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.628725+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.629796+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:20:04.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.718525+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:04.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:04 vm08 bash[17774]: audit 2026-03-09T18:20:04.736032+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.367 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mgr update...
2026-03-09T18:20:05.418 DEBUG:teuthology.orchestra.run.vm08:mgr.x> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x.service
2026-03-09T18:20:05.419 INFO:tasks.cephadm:Deploying OSDs...
2026-03-09T18:20:05.419 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-09T18:20:05.419 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T18:20:05.423 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:20:05.423 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d?
2026-03-09T18:20:05.469 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda
2026-03-09T18:20:05.470 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb
2026-03-09T18:20:05.470 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc
2026-03-09T18:20:05.470 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd
2026-03-09T18:20:05.470 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde
2026-03-09T18:20:05.470 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T18:20:05.470 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T18:20:05.470 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 18:19:47.821061549 +0000
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 18:19:46.973061549 +0000
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 18:19:46.973061549 +0000
2026-03-09T18:20:05.514 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-09T18:20:05.514 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T18:20:05.562 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-09T18:20:05.562 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-09T18:20:05.562 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000238847 s, 2.1 MB/s
2026-03-09T18:20:05.562 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T18:20:05.610 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 18:19:47.933061549 +0000
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 18:19:46.985061549 +0000
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 18:19:46.985061549 +0000
2026-03-09T18:20:05.658 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-09T18:20:05.658 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T18:20:05.710 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-09T18:20:05.710 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-09T18:20:05.710 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000222276 s, 2.3 MB/s
2026-03-09T18:20:05.710 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T18:20:05.763 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 18:19:48.049061549 +0000
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 18:19:47.001061549 +0000
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 18:19:47.001061549 +0000
2026-03-09T18:20:05.810 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-09T18:20:05.810 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T18:20:05.861 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cephadm 2026-03-09T18:20:04.052583+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cephadm 2026-03-09T18:20:04.055446+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cluster 2026-03-09T18:20:04.376627+0000 mgr.y (mgr.14152) 28 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cephadm 2026-03-09T18:20:04.389151+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cephadm 2026-03-09T18:20:04.392648+0000 mgr.y (mgr.14152) 30 : cephadm [INF] Reconfiguring daemon mon.b on vm08
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cephadm 2026-03-09T18:20:04.630750+0000 mgr.y (mgr.14152) 31 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: cephadm 2026-03-09T18:20:04.631154+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:04.741121+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.365282+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.412068+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.412901+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.413387+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.423223+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.424191+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.428563+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.429103+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:05 vm00 bash[22468]: audit 2026-03-09T18:20:05.429653+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cephadm 2026-03-09T18:20:04.052583+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cephadm 2026-03-09T18:20:04.055446+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cluster 2026-03-09T18:20:04.376627+0000 mgr.y (mgr.14152) 28 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cephadm 2026-03-09T18:20:04.389151+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cephadm 2026-03-09T18:20:04.392648+0000 mgr.y (mgr.14152) 30 : cephadm [INF] Reconfiguring daemon mon.b on vm08
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cephadm 2026-03-09T18:20:04.630750+0000 mgr.y (mgr.14152) 31 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: cephadm 2026-03-09T18:20:04.631154+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:04.741121+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.365282+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.412068+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.412901+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.413387+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.423223+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.424191+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.428563+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T18:20:05.862 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.429103+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:20:05.863 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:05 vm00 bash[17468]: audit 2026-03-09T18:20:05.429653+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:05.863 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-09T18:20:05.863 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-09T18:20:05.863 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000188372 s, 2.7 MB/s
2026-03-09T18:20:05.864 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T18:20:05.911 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde
2026-03-09T18:20:05.958 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-09 18:19:48.141061549 +0000
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-09 18:19:46.989061549 +0000
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-09 18:19:46.989061549 +0000
2026-03-09T18:20:05.959 INFO:teuthology.orchestra.run.vm00.stdout: Birth: -
2026-03-09T18:20:05.959 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cephadm 2026-03-09T18:20:04.052583+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cephadm 2026-03-09T18:20:04.055446+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cluster 2026-03-09T18:20:04.376627+0000 mgr.y (mgr.14152) 28 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cephadm 2026-03-09T18:20:04.389151+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cephadm 2026-03-09T18:20:04.392648+0000 mgr.y (mgr.14152) 30 : cephadm [INF] Reconfiguring daemon mon.b on vm08
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cephadm 2026-03-09T18:20:04.630750+0000 mgr.y (mgr.14152) 31 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: cephadm 2026-03-09T18:20:04.631154+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:04.741121+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.365282+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.412068+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.412901+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.413387+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.423223+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.424191+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.428563+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.429103+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 bash[17774]: audit 2026-03-09T18:20:05.429653+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:05 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:20:06.010 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-09T18:20:06.011 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-09T18:20:06.011 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000321331 s, 1.6 MB/s
2026-03-09T18:20:06.011 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T18:20:06.059 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-09T18:20:06.059 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T18:20:06.065 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:20:06.066 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d?
2026-03-09T18:20:06.115 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda
2026-03-09T18:20:06.115 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb
2026-03-09T18:20:06.115 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc
2026-03-09T18:20:06.115 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd
2026-03-09T18:20:06.115 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde
2026-03-09T18:20:06.115 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T18:20:06.115 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T18:20:06.115 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 18:19:51.569170273 +0000
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.163 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-09T18:20:06.163 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T18:20:06.219 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-09T18:20:06.220 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-09T18:20:06.220 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000341047 s, 1.5 MB/s
2026-03-09T18:20:06.220 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T18:20:06.271 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 18:19:51.669170273 +0000
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.315 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-09T18:20:06.315 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T18:20:06.325 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:20:06.325 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:06 vm08 bash[18535]: debug 2026-03-09T18:20:06.237+0000 7f523c6a1700 1 -- 192.168.123.108:0/2001069499 <== mon.2 v2:192.168.123.108:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x55dd8eb98340 con 0x55dd8f914400
2026-03-09T18:20:06.325 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:06 vm08 bash[18535]: debug 2026-03-09T18:20:06.321+0000 7f5245310000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:20:06.332 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-09T18:20:06.333 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-09T18:20:06.333 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000200144 s, 2.6 MB/s
2026-03-09T18:20:06.333 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T18:20:06.384 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 18:19:51.757170273 +0000
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.432 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-09T18:20:06.432 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T18:20:06.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:06 vm08 bash[18535]: debug 2026-03-09T18:20:06.369+0000 7f5245310000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:20:06.483 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-09T18:20:06.483 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-09T18:20:06.483 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000177051 s, 2.9 MB/s
2026-03-09T18:20:06.483 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T18:20:06.529 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 18:19:51.849170273 +0000
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 18:19:50.685170273 +0000
2026-03-09T18:20:06.575 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-09T18:20:06.575 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T18:20:06.623 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-09T18:20:06.623 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-09T18:20:06.623 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000126808 s, 4.0 MB/s
2026-03-09T18:20:06.624 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T18:20:06.668 INFO:tasks.cephadm:Deploying osd.0 on vm00 with /dev/vde...
2026-03-09T18:20:06.668 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vde
2026-03-09T18:20:06.752 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:06 vm08 bash[18535]: debug 2026-03-09T18:20:06.709+0000 7f5245310000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: audit 2026-03-09T18:20:05.358536+0000 mgr.y (mgr.14152) 33 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm08=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: cephadm 2026-03-09T18:20:05.359449+0000 mgr.y (mgr.14152) 34 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm08=x;count:2
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: cephadm 2026-03-09T18:20:05.430273+0000 mgr.y (mgr.14152) 35 : cephadm [INF] Deploying daemon mgr.x on vm08
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: audit 2026-03-09T18:20:06.082323+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: audit 2026-03-09T18:20:06.083359+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: audit 2026-03-09T18:20:06.083901+0000 mon.a (mon.0) 200 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:06 vm00 bash[22468]: audit 2026-03-09T18:20:06.084254+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: audit 2026-03-09T18:20:05.358536+0000 mgr.y (mgr.14152) 33 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm08=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: cephadm 2026-03-09T18:20:05.359449+0000 mgr.y (mgr.14152) 34 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm08=x;count:2
2026-03-09T18:20:06.933 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: cephadm 2026-03-09T18:20:05.430273+0000 mgr.y (mgr.14152) 35 : cephadm [INF] Deploying daemon mgr.x on vm08
2026-03-09T18:20:06.934 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: audit 2026-03-09T18:20:06.082323+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
2026-03-09T18:20:06.934 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: audit 2026-03-09T18:20:06.083359+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:20:06.934 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: audit 2026-03-09T18:20:06.083901+0000 mon.a (mon.0) 200 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y'
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:06.934 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:06 vm00 bash[17468]: audit 2026-03-09T18:20:06.084254+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 bash[17774]: audit 2026-03-09T18:20:05.358536+0000 mgr.y (mgr.14152) 33 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm08=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 bash[17774]: cephadm 2026-03-09T18:20:05.359449+0000 mgr.y (mgr.14152) 34 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm08=x;count:2 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 bash[17774]: cephadm 2026-03-09T18:20:05.430273+0000 mgr.y (mgr.14152) 35 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 bash[17774]: audit 2026-03-09T18:20:06.082323+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 bash[17774]: audit 2026-03-09T18:20:06.083359+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:06 vm08 bash[17774]: audit 2026-03-09T18:20:06.083901+0000 mon.a (mon.0) 200 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 
09 18:20:06 vm08 bash[17774]: audit 2026-03-09T18:20:06.084254+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:07.307 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:20:07.318 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm00:/dev/vde 2026-03-09T18:20:07.573 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.253+0000 7f5245310000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:20:07.573 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.353+0000 7f5245310000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:07 vm00 bash[17468]: cluster 2026-03-09T18:20:06.376826+0000 mgr.y (mgr.14152) 36 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:07 vm00 bash[17468]: audit 2026-03-09T18:20:07.724551+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:07 vm00 bash[17468]: audit 2026-03-09T18:20:07.726415+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:07 vm00 bash[17468]: audit 2026-03-09T18:20:07.726939+0000 mon.a (mon.0) 204 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:07 vm00 bash[22468]: cluster 2026-03-09T18:20:06.376826+0000 mgr.y (mgr.14152) 36 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:07 vm00 bash[22468]: audit 2026-03-09T18:20:07.724551+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:07 vm00 bash[22468]: audit 2026-03-09T18:20:07.726415+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:07.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:07 vm00 bash[22468]: audit 2026-03-09T18:20:07.726939+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:07 vm08 bash[17774]: cluster 2026-03-09T18:20:06.376826+0000 mgr.y (mgr.14152) 36 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:07 vm08 bash[17774]: audit 2026-03-09T18:20:07.724551+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:07 vm08 bash[17774]: audit 2026-03-09T18:20:07.726415+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:07 vm08 bash[17774]: audit 2026-03-09T18:20:07.726939+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.569+0000 7f5245310000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.677+0000 7f5245310000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:20:07.893 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.737+0000 7f5245310000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:20:08.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.889+0000 7f5245310000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:20:08.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:07 vm08 bash[18535]: debug 2026-03-09T18:20:07.953+0000 7f5245310000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:20:08.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:08 vm08 bash[18535]: debug 2026-03-09T18:20:08.025+0000 7f5245310000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:20:08.907 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:08 vm08 bash[18535]: debug 2026-03-09T18:20:08.561+0000 7f5245310000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:20:08.907 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:08 vm08 bash[18535]: debug 2026-03-09T18:20:08.617+0000 7f5245310000 -1 mgr[py] Module telegraf has 
missing NOTIFY_TYPES member 2026-03-09T18:20:08.907 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:08 vm08 bash[18535]: debug 2026-03-09T18:20:08.685+0000 7f5245310000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:20:08.907 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:08 vm08 bash[17774]: audit 2026-03-09T18:20:07.723120+0000 mgr.y (mgr.14152) 37 : audit [DBG] from='client.24112 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:08.907 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:08 vm08 bash[17774]: audit 2026-03-09T18:20:08.650148+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.117 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:08 vm00 bash[22468]: audit 2026-03-09T18:20:07.723120+0000 mgr.y (mgr.14152) 37 : audit [DBG] from='client.24112 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:09.117 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:08 vm00 bash[22468]: audit 2026-03-09T18:20:08.650148+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.117 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:08 vm00 bash[17468]: audit 2026-03-09T18:20:07.723120+0000 mgr.y (mgr.14152) 37 : audit [DBG] from='client.24112 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:09.117 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:08 vm00 bash[17468]: audit 2026-03-09T18:20:08.650148+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.185 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 
2026-03-09T18:20:09.053+0000 7f5245310000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:20:09.185 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.113+0000 7f5245310000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:20:09.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.181+0000 7f5245310000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:20:09.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.277+0000 7f5245310000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: cluster 2026-03-09T18:20:08.377056+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:08.912700+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:08.916689+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:08.918275+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:08.918861+0000 mon.a (mon.0) 209 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:08.919362+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:09.199903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:09.201418+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:09.202074+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:09.202520+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:09 vm08 bash[17774]: audit 2026-03-09T18:20:09.206250+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.625+0000 7f5245310000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:20:09.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.825+0000 7f5245310000 
-1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: cluster 2026-03-09T18:20:08.377056+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:08.912700+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:08.916689+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:08.918275+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:08.918861+0000 mon.a (mon.0) 209 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:08.919362+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:09.199903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 
2026-03-09T18:20:09.201418+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:09.202074+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:09.202520+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:09 vm00 bash[22468]: audit 2026-03-09T18:20:09.206250+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: cluster 2026-03-09T18:20:08.377056+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:08.912700+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:08.916689+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:08.918275+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", 
"osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:08.918861+0000 mon.a (mon.0) 209 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:08.919362+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:09.199903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:09.201418+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:09.202074+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:09.202520+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:09 vm00 bash[17468]: audit 2026-03-09T18:20:09.206250+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:10.143 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 
18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.885+0000 7f5245310000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:20:10.143 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:09 vm08 bash[18535]: debug 2026-03-09T18:20:09.957+0000 7f5245310000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:20:10.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:10 vm08 bash[18535]: debug 2026-03-09T18:20:10.137+0000 7f5245310000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:20:11.118 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: cephadm 2026-03-09T18:20:08.918055+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: cephadm 2026-03-09T18:20:08.919902+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: cluster 2026-03-09T18:20:10.757928+0000 mon.a (mon.0) 216 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: audit 2026-03-09T18:20:10.760549+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: audit 2026-03-09T18:20:10.762231+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: audit 2026-03-09T18:20:10.763405+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 
192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:10 vm00 bash[22468]: audit 2026-03-09T18:20:10.763860+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: cephadm 2026-03-09T18:20:08.918055+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: cephadm 2026-03-09T18:20:08.919902+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: cluster 2026-03-09T18:20:10.757928+0000 mon.a (mon.0) 216 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: audit 2026-03-09T18:20:10.760549+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: audit 2026-03-09T18:20:10.762231+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: audit 2026-03-09T18:20:10.763405+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 
192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:20:11.119 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:10 vm00 bash[17468]: audit 2026-03-09T18:20:10.763860+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: cephadm 2026-03-09T18:20:08.918055+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: cephadm 2026-03-09T18:20:08.919902+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: cluster 2026-03-09T18:20:10.757928+0000 mon.a (mon.0) 216 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: audit 2026-03-09T18:20:10.760549+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: audit 2026-03-09T18:20:10.762231+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: audit 2026-03-09T18:20:10.763405+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 
192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:10 vm08 bash[17774]: audit 2026-03-09T18:20:10.763860+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.108:0/2524511948' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:20:11.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:20:10 vm08 bash[18535]: debug 2026-03-09T18:20:10.753+0000 7f5245310000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: cluster 2026-03-09T18:20:10.377243+0000 mgr.y (mgr.14152) 41 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: cluster 2026-03-09T18:20:10.778506+0000 mon.a (mon.0) 217 : cluster [DBG] mgrmap e14: y(active, since 30s), standbys: x 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: audit 2026-03-09T18:20:10.778602+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: audit 2026-03-09T18:20:11.026251+0000 mon.a (mon.0) 219 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]: dispatch 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: audit 2026-03-09T18:20:11.027584+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 
192.168.123.100:0/1279692907' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]: dispatch 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: audit 2026-03-09T18:20:11.031643+0000 mon.a (mon.0) 220 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]': finished 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: cluster 2026-03-09T18:20:11.031686+0000 mon.a (mon.0) 221 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: audit 2026-03-09T18:20:11.031808+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:11 vm00 bash[22468]: audit 2026-03-09T18:20:11.689674+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 
192.168.123.100:0/1235378472' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:12.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: cluster 2026-03-09T18:20:10.377243+0000 mgr.y (mgr.14152) 41 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: cluster 2026-03-09T18:20:10.778506+0000 mon.a (mon.0) 217 : cluster [DBG] mgrmap e14: y(active, since 30s), standbys: x 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: audit 2026-03-09T18:20:10.778602+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: audit 2026-03-09T18:20:11.026251+0000 mon.a (mon.0) 219 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]: dispatch 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: audit 2026-03-09T18:20:11.027584+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.100:0/1279692907' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]: dispatch 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: audit 2026-03-09T18:20:11.031643+0000 mon.a (mon.0) 220 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]': finished 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: cluster 2026-03-09T18:20:11.031686+0000 mon.a (mon.0) 221 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: audit 2026-03-09T18:20:11.031808+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:12.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:11 vm00 bash[17468]: audit 2026-03-09T18:20:11.689674+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.100:0/1235378472' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: cluster 2026-03-09T18:20:10.377243+0000 mgr.y (mgr.14152) 41 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: cluster 2026-03-09T18:20:10.778506+0000 mon.a (mon.0) 217 : cluster [DBG] mgrmap e14: y(active, since 30s), standbys: x 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: audit 2026-03-09T18:20:10.778602+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: audit 2026-03-09T18:20:11.026251+0000 mon.a (mon.0) 219 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]: dispatch 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: audit 2026-03-09T18:20:11.027584+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.100:0/1279692907' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]: dispatch 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: audit 2026-03-09T18:20:11.031643+0000 mon.a (mon.0) 220 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0cac7d6-07bf-4b00-9243-24f6ec5bc470"}]': finished 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: cluster 2026-03-09T18:20:11.031686+0000 mon.a (mon.0) 221 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: audit 2026-03-09T18:20:11.031808+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:11 vm08 bash[17774]: audit 2026-03-09T18:20:11.689674+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 
192.168.123.100:0/1235378472' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:14.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:13 vm00 bash[17468]: cluster 2026-03-09T18:20:12.377449+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:13 vm00 bash[22468]: cluster 2026-03-09T18:20:12.377449+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:14.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:13 vm08 bash[17774]: cluster 2026-03-09T18:20:12.377449+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:16.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:15 vm00 bash[17468]: cluster 2026-03-09T18:20:14.377686+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:16.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:15 vm00 bash[22468]: cluster 2026-03-09T18:20:14.377686+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:16.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:15 vm08 bash[17774]: cluster 2026-03-09T18:20:14.377686+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:17.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:17 vm00 bash[17468]: cluster 2026-03-09T18:20:16.377903+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:17.348 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:17 vm00 bash[22468]: cluster 2026-03-09T18:20:16.377903+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:17.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:17 vm08 bash[17774]: cluster 2026-03-09T18:20:16.377903+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:17 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:18 vm00 bash[22468]: audit 2026-03-09T18:20:17.393448+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:18 vm00 bash[22468]: audit 2026-03-09T18:20:17.393983+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:18 vm00 bash[22468]: cephadm 2026-03-09T18:20:17.394410+0000 mgr.y (mgr.14152) 45 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:17 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:18 vm00 bash[17468]: audit 2026-03-09T18:20:17.393448+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:18 vm00 bash[17468]: audit 2026-03-09T18:20:17.393983+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:18 vm00 bash[17468]: cephadm 2026-03-09T18:20:17.394410+0000 mgr.y (mgr.14152) 45 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:20:18.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:20:17 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:20:18.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:18 vm08 bash[17774]: audit 2026-03-09T18:20:17.393448+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:20:18.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:18 vm08 bash[17774]: audit 2026-03-09T18:20:17.393983+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:18.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:18 vm08 bash[17774]: cephadm 2026-03-09T18:20:17.394410+0000 mgr.y (mgr.14152) 45 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:20:18.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:18 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:18.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:20:18 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:18.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:18 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:19 vm00 bash[22468]: audit 2026-03-09T18:20:18.285745+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:19 vm00 bash[22468]: audit 2026-03-09T18:20:18.292100+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:19 vm00 bash[22468]: audit 2026-03-09T18:20:18.295505+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:19 vm00 bash[22468]: audit 2026-03-09T18:20:18.296078+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:19 vm00 bash[22468]: cluster 2026-03-09T18:20:18.378132+0000 mgr.y (mgr.14152) 46 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:19 vm00 bash[17468]: audit 2026-03-09T18:20:18.285745+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:19 vm00 bash[17468]: audit 2026-03-09T18:20:18.292100+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:19 vm00 bash[17468]: audit 2026-03-09T18:20:18.295505+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:19 vm00 bash[17468]: audit 2026-03-09T18:20:18.296078+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:19.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:19 vm00 bash[17468]: cluster 2026-03-09T18:20:18.378132+0000 mgr.y (mgr.14152) 46 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:19 vm08 bash[17774]: audit 2026-03-09T18:20:18.285745+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:19 vm08 bash[17774]: audit 2026-03-09T18:20:18.292100+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:19 vm08 bash[17774]: audit 2026-03-09T18:20:18.295505+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:19 vm08 bash[17774]: audit 2026-03-09T18:20:18.296078+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:19 vm08 
bash[17774]: cluster 2026-03-09T18:20:18.378132+0000 mgr.y (mgr.14152) 46 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:21 vm00 bash[22468]: cluster 2026-03-09T18:20:20.378331+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:21 vm00 bash[22468]: audit 2026-03-09T18:20:21.250065+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:21 vm00 bash[22468]: audit 2026-03-09T18:20:21.253694+0000 mon.a (mon.0) 230 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:21 vm00 bash[22468]: audit 2026-03-09T18:20:21.255264+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:21 vm00 bash[17468]: cluster 2026-03-09T18:20:20.378331+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:21 vm00 bash[17468]: audit 2026-03-09T18:20:21.250065+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:21 vm00 bash[17468]: audit 2026-03-09T18:20:21.253694+0000 mon.a (mon.0) 230 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 
2026-03-09T18:20:21.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:21 vm00 bash[17468]: audit 2026-03-09T18:20:21.255264+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:21.670 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 0 on host 'vm00' 2026-03-09T18:20:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:21 vm08 bash[17774]: cluster 2026-03-09T18:20:20.378331+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:21 vm08 bash[17774]: audit 2026-03-09T18:20:21.250065+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:21 vm08 bash[17774]: audit 2026-03-09T18:20:21.253694+0000 mon.a (mon.0) 230 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:20:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:21 vm08 bash[17774]: audit 2026-03-09T18:20:21.255264+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:21.756 DEBUG:teuthology.orchestra.run.vm00:osd.0> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.0.service 2026-03-09T18:20:21.757 INFO:tasks.cephadm:Deploying osd.1 on vm00 with /dev/vdd... 
2026-03-09T18:20:21.757 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vdd 2026-03-09T18:20:22.374 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:20:22.382 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm00:/dev/vdd 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:21.665504+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:21.670771+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:21.671783+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:21.677736+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:22.254206+0000 mon.a (mon.0) 236 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' 
cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: cluster 2026-03-09T18:20:22.254232+0000 mon.a (mon.0) 237 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:22.254717+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:22 vm00 bash[22468]: audit 2026-03-09T18:20:22.255650+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:21.665504+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:21.670771+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:21.671783+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:21.677736+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:22.254206+0000 mon.a (mon.0) 236 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: cluster 2026-03-09T18:20:22.254232+0000 mon.a (mon.0) 237 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:22.254717+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:22 vm00 bash[17468]: audit 2026-03-09T18:20:22.255650+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:21.665504+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:21.670771+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:21.671783+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:21.677736+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:22.254206+0000 mon.a (mon.0) 236 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: cluster 2026-03-09T18:20:22.254232+0000 mon.a (mon.0) 237 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:22.254717+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:22 vm08 bash[17774]: audit 2026-03-09T18:20:22.255650+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:23.634 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:20:23 vm00 bash[25170]: debug 2026-03-09T18:20:23.257+0000 7fcb2ef37700 -1 osd.0 0 waiting for initial osdmap 2026-03-09T18:20:23.634 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:20:23 vm00 bash[25170]: debug 2026-03-09T18:20:23.261+0000 7fcb2a0cf700 -1 osd.0 7 set_numa_affinity unable to identify 
public interface '' numa node: (2) No such file or directory 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: cluster 2026-03-09T18:20:22.378724+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:22.798553+0000 mgr.y (mgr.14152) 49 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:22.799995+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:22.801867+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:22.802455+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:23.256196+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: 
cluster 2026-03-09T18:20:23.256239+0000 mon.a (mon.0) 244 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:23.258345+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:23 vm08 bash[17774]: audit 2026-03-09T18:20:23.262998+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: cluster 2026-03-09T18:20:22.378724+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:22.798553+0000 mgr.y (mgr.14152) 49 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:22.799995+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:22.801867+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:22.802455+0000 mon.a (mon.0) 242 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:23.256196+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: cluster 2026-03-09T18:20:23.256239+0000 mon.a (mon.0) 244 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:23.258345+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:23 vm00 bash[22468]: audit 2026-03-09T18:20:23.262998+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: cluster 2026-03-09T18:20:22.378724+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:22.798553+0000 mgr.y (mgr.14152) 49 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:22.799995+0000 mon.a (mon.0) 240 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:22.801867+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:22.802455+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:23.256196+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: cluster 2026-03-09T18:20:23.256239+0000 mon.a (mon.0) 244 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:23.258345+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:24.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:23 vm00 bash[17468]: audit 2026-03-09T18:20:23.262998+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:24 vm08 bash[17774]: cluster 
2026-03-09T18:20:22.289256+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:24 vm08 bash[17774]: cluster 2026-03-09T18:20:22.289417+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:24 vm08 bash[17774]: audit 2026-03-09T18:20:24.260736+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:24 vm08 bash[17774]: cluster 2026-03-09T18:20:24.263534+0000 mon.a (mon.0) 248 : cluster [INF] osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438] boot 2026-03-09T18:20:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:24 vm08 bash[17774]: cluster 2026-03-09T18:20:24.263914+0000 mon.a (mon.0) 249 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T18:20:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:24 vm08 bash[17774]: audit 2026-03-09T18:20:24.264345+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:24 vm00 bash[22468]: cluster 2026-03-09T18:20:22.289256+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:24 vm00 bash[22468]: cluster 2026-03-09T18:20:22.289417+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:24 vm00 bash[22468]: audit 2026-03-09T18:20:24.260736+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:25.134 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:24 vm00 bash[22468]: cluster 2026-03-09T18:20:24.263534+0000 mon.a (mon.0) 248 : cluster [INF] osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438] boot 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:24 vm00 bash[22468]: cluster 2026-03-09T18:20:24.263914+0000 mon.a (mon.0) 249 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:24 vm00 bash[22468]: audit 2026-03-09T18:20:24.264345+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:24 vm00 bash[17468]: cluster 2026-03-09T18:20:22.289256+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:24 vm00 bash[17468]: cluster 2026-03-09T18:20:22.289417+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:24 vm00 bash[17468]: audit 2026-03-09T18:20:24.260736+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:24 vm00 bash[17468]: cluster 2026-03-09T18:20:24.263534+0000 mon.a (mon.0) 248 : cluster [INF] osd.0 [v2:192.168.123.100:6802/4034633438,v1:192.168.123.100:6803/4034633438] boot 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:24 vm00 bash[17468]: cluster 2026-03-09T18:20:24.263914+0000 mon.a (mon.0) 249 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T18:20:25.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:24 vm00 bash[17468]: audit 2026-03-09T18:20:24.264345+0000 mon.a (mon.0) 250 : audit 
[DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:20:25.943 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:25 vm00 bash[22468]: cluster 2026-03-09T18:20:24.378963+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:25.944 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:25 vm00 bash[17468]: cluster 2026-03-09T18:20:24.378963+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:25 vm08 bash[17774]: cluster 2026-03-09T18:20:24.378963+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: cephadm 2026-03-09T18:20:25.944214+0000 mgr.y (mgr.14152) 51 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: audit 2026-03-09T18:20:25.950297+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: audit 2026-03-09T18:20:25.952107+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: audit 2026-03-09T18:20:25.956468+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: cluster 2026-03-09T18:20:26.271091+0000 mon.a (mon.0) 254 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 
in 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: cluster 2026-03-09T18:20:26.379207+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: audit 2026-03-09T18:20:26.936236+0000 mon.a (mon.0) 255 : audit [INF] from='client.? 192.168.123.100:0/2440395567' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9fc1e6b3-451c-497e-a994-131046179fb9"}]: dispatch 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: audit 2026-03-09T18:20:26.944083+0000 mon.a (mon.0) 256 : audit [INF] from='client.? 192.168.123.100:0/2440395567' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9fc1e6b3-451c-497e-a994-131046179fb9"}]': finished 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: cluster 2026-03-09T18:20:26.944166+0000 mon.a (mon.0) 257 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:20:27.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:26 vm08 bash[17774]: audit 2026-03-09T18:20:26.944217+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:27.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: cephadm 2026-03-09T18:20:25.944214+0000 mgr.y (mgr.14152) 51 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:20:27.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: audit 2026-03-09T18:20:25.950297+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:27.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: audit 2026-03-09T18:20:25.952107+0000 mon.a (mon.0) 252 : audit [INF] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:27.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: audit 2026-03-09T18:20:25.956468+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:27.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: cluster 2026-03-09T18:20:26.271091+0000 mon.a (mon.0) 254 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: cluster 2026-03-09T18:20:26.379207+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: audit 2026-03-09T18:20:26.936236+0000 mon.a (mon.0) 255 : audit [INF] from='client.? 192.168.123.100:0/2440395567' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9fc1e6b3-451c-497e-a994-131046179fb9"}]: dispatch 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: audit 2026-03-09T18:20:26.944083+0000 mon.a (mon.0) 256 : audit [INF] from='client.? 
192.168.123.100:0/2440395567' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9fc1e6b3-451c-497e-a994-131046179fb9"}]': finished 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: cluster 2026-03-09T18:20:26.944166+0000 mon.a (mon.0) 257 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:26 vm00 bash[17468]: audit 2026-03-09T18:20:26.944217+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: cephadm 2026-03-09T18:20:25.944214+0000 mgr.y (mgr.14152) 51 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: audit 2026-03-09T18:20:25.950297+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: audit 2026-03-09T18:20:25.952107+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: audit 2026-03-09T18:20:25.956468+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: cluster 2026-03-09T18:20:26.271091+0000 mon.a (mon.0) 254 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: cluster 2026-03-09T18:20:26.379207+0000 mgr.y (mgr.14152) 52 : cluster [DBG] 
pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: audit 2026-03-09T18:20:26.936236+0000 mon.a (mon.0) 255 : audit [INF] from='client.? 192.168.123.100:0/2440395567' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9fc1e6b3-451c-497e-a994-131046179fb9"}]: dispatch 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: audit 2026-03-09T18:20:26.944083+0000 mon.a (mon.0) 256 : audit [INF] from='client.? 192.168.123.100:0/2440395567' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9fc1e6b3-451c-497e-a994-131046179fb9"}]': finished 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: cluster 2026-03-09T18:20:26.944166+0000 mon.a (mon.0) 257 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:20:27.326 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:26 vm00 bash[22468]: audit 2026-03-09T18:20:26.944217+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:28.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:28 vm00 bash[17468]: audit 2026-03-09T18:20:27.570879+0000 mon.c (mon.1) 3 : audit [DBG] from='client.? 192.168.123.100:0/1735502521' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:28.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:28 vm00 bash[22468]: audit 2026-03-09T18:20:27.570879+0000 mon.c (mon.1) 3 : audit [DBG] from='client.? 192.168.123.100:0/1735502521' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:28.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:28 vm08 bash[17774]: audit 2026-03-09T18:20:27.570879+0000 mon.c (mon.1) 3 : audit [DBG] from='client.? 
192.168.123.100:0/1735502521' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:29.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:29 vm00 bash[17468]: cluster 2026-03-09T18:20:28.379461+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:29.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:29 vm00 bash[22468]: cluster 2026-03-09T18:20:28.379461+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:29 vm08 bash[17774]: cluster 2026-03-09T18:20:28.379461+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:31.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:31 vm08 bash[17774]: cluster 2026-03-09T18:20:30.379731+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:31.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:31 vm00 bash[17468]: cluster 2026-03-09T18:20:30.379731+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:31.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:31 vm00 bash[22468]: cluster 2026-03-09T18:20:30.379731+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:33.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:33 vm00 bash[22468]: cluster 2026-03-09T18:20:32.379980+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:33.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:33 vm00 bash[22468]: audit 2026-03-09T18:20:33.148266+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:20:33.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:33 vm00 bash[22468]: audit 2026-03-09T18:20:33.148731+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:33.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:33 vm00 bash[17468]: cluster 2026-03-09T18:20:32.379980+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:33.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:33 vm00 bash[17468]: audit 2026-03-09T18:20:33.148266+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:20:33.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:33 vm00 bash[17468]: audit 2026-03-09T18:20:33.148731+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:33 vm08 bash[17774]: cluster 2026-03-09T18:20:32.379980+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:33 vm08 bash[17774]: audit 2026-03-09T18:20:33.148266+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:20:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:33 vm08 bash[17774]: audit 2026-03-09T18:20:33.148731+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:20:34.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:33 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:34.012 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:20:33 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:34.012 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:33 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:34.012 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:20:33 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:20:34.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:20:34 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:34.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:34 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:34.384 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:20:34 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:34.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:34 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:34 vm00 bash[22468]: cephadm 2026-03-09T18:20:33.149081+0000 mgr.y (mgr.14152) 56 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:34 vm00 bash[22468]: audit 2026-03-09T18:20:34.114838+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:34 vm00 bash[22468]: audit 2026-03-09T18:20:34.142192+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:34 vm00 bash[22468]: audit 2026-03-09T18:20:34.143251+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:34 vm00 bash[22468]: audit 2026-03-09T18:20:34.145461+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:34 vm00 bash[17468]: cephadm 2026-03-09T18:20:33.149081+0000 mgr.y (mgr.14152) 56 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:34 vm00 bash[17468]: audit 2026-03-09T18:20:34.114838+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:34 vm00 bash[17468]: audit 2026-03-09T18:20:34.142192+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:34 vm00 bash[17468]: audit 2026-03-09T18:20:34.143251+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:34.707 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:34 vm00 bash[17468]: audit 2026-03-09T18:20:34.145461+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:34 vm08 bash[17774]: cephadm 2026-03-09T18:20:33.149081+0000 mgr.y (mgr.14152) 56 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-09T18:20:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:34 vm08 bash[17774]: audit 2026-03-09T18:20:34.114838+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:34 vm08 bash[17774]: audit 2026-03-09T18:20:34.142192+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:34 vm08 bash[17774]: audit 2026-03-09T18:20:34.143251+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:34 vm08 bash[17774]: audit 2026-03-09T18:20:34.145461+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:35.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:35 vm00 
bash[17468]: cluster 2026-03-09T18:20:34.380264+0000 mgr.y (mgr.14152) 57 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:35 vm00 bash[22468]: cluster 2026-03-09T18:20:34.380264+0000 mgr.y (mgr.14152) 57 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:35.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:35 vm08 bash[17774]: cluster 2026-03-09T18:20:34.380264+0000 mgr.y (mgr.14152) 57 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:37.117 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:37 vm00 bash[17468]: cluster 2026-03-09T18:20:36.380485+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:37.117 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:37 vm00 bash[22468]: cluster 2026-03-09T18:20:36.380485+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:37.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:37 vm08 bash[17774]: cluster 2026-03-09T18:20:36.380485+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:37.678 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 1 on host 'vm00' 2026-03-09T18:20:37.761 DEBUG:teuthology.orchestra.run.vm00:osd.1> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.1.service 2026-03-09T18:20:37.762 INFO:tasks.cephadm:Deploying osd.2 on vm00 with /dev/vdc... 
2026-03-09T18:20:37.762 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vdc 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.160880+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.176554+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.264217+0000 mon.c (mon.1) 4 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.264820+0000 mon.a (mon.0) 267 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.673067+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.693471+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 
2026-03-09T18:20:37.694539+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:38.169 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:38 vm00 bash[17468]: audit 2026-03-09T18:20:37.695028+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.160880+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.176554+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.264217+0000 mon.c (mon.1) 4 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.264820+0000 mon.a (mon.0) 267 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.673067+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.693471+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.694539+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:38 vm08 bash[17774]: audit 2026-03-09T18:20:37.695028+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:38.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.160880+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.176554+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.264217+0000 mon.c (mon.1) 4 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.264820+0000 mon.a (mon.0) 267 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.673067+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.693471+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.694539+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:38.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:38 vm00 bash[22468]: audit 2026-03-09T18:20:37.695028+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:38.562 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:20:38.577 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm00:/dev/vdc 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:38.180419+0000 mon.a (mon.0) 272 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: cluster 2026-03-09T18:20:38.180577+0000 mon.a (mon.0) 273 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:38.181039+0000 mon.c (mon.1) 5 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578]' entity='osd.1' 
cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:38.181102+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:38.181716+0000 mon.a (mon.0) 275 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: cluster 2026-03-09T18:20:38.380751+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:39.094268+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:39.095550+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:39 vm00 bash[17468]: audit 2026-03-09T18:20:39.096009+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 2026-03-09T18:20:38.180419+0000 mon.a 
(mon.0) 272 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: cluster 2026-03-09T18:20:38.180577+0000 mon.a (mon.0) 273 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 2026-03-09T18:20:38.181039+0000 mon.c (mon.1) 5 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 2026-03-09T18:20:38.181102+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 2026-03-09T18:20:38.181716+0000 mon.a (mon.0) 275 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:39.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: cluster 2026-03-09T18:20:38.380751+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:39.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 2026-03-09T18:20:39.094268+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:39.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 
2026-03-09T18:20:39.095550+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:39.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:39 vm00 bash[22468]: audit 2026-03-09T18:20:39.096009+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:39.385 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:20:39 vm00 bash[28319]: debug 2026-03-09T18:20:39.185+0000 7f817b7f4700 -1 osd.1 0 waiting for initial osdmap 2026-03-09T18:20:39.385 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:20:39 vm00 bash[28319]: debug 2026-03-09T18:20:39.205+0000 7f817818f700 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:20:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:38.180419+0000 mon.a (mon.0) 272 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: cluster 2026-03-09T18:20:38.180577+0000 mon.a (mon.0) 273 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:38.181039+0000 mon.c (mon.1) 5 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:38.181102+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:38.181716+0000 mon.a (mon.0) 275 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: cluster 2026-03-09T18:20:38.380751+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:39.094268+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:39.095550+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:39 vm08 bash[17774]: audit 2026-03-09T18:20:39.096009+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:40.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:40 vm08 bash[17774]: audit 2026-03-09T18:20:39.092728+0000 mgr.y (mgr.14152) 60 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:40.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:40 vm08 bash[17774]: audit 
2026-03-09T18:20:39.183093+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:40.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:40 vm08 bash[17774]: cluster 2026-03-09T18:20:39.183318+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:20:40.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:40 vm08 bash[17774]: audit 2026-03-09T18:20:39.190098+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:40.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:40 vm08 bash[17774]: audit 2026-03-09T18:20:40.186465+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:40 vm00 bash[22468]: audit 2026-03-09T18:20:39.092728+0000 mgr.y (mgr.14152) 60 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:40 vm00 bash[22468]: audit 2026-03-09T18:20:39.183093+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:40 vm00 bash[22468]: cluster 2026-03-09T18:20:39.183318+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:40 vm00 bash[22468]: audit 2026-03-09T18:20:39.190098+0000 mon.a (mon.0) 281 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:40 vm00 bash[22468]: audit 2026-03-09T18:20:40.186465+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:40 vm00 bash[17468]: audit 2026-03-09T18:20:39.092728+0000 mgr.y (mgr.14152) 60 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:40 vm00 bash[17468]: audit 2026-03-09T18:20:39.183093+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:40 vm00 bash[17468]: cluster 2026-03-09T18:20:39.183318+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:40 vm00 bash[17468]: audit 2026-03-09T18:20:39.190098+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:40.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:40 vm00 bash[17468]: audit 2026-03-09T18:20:40.186465+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: cluster 2026-03-09T18:20:38.302334+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 
2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: cluster 2026-03-09T18:20:38.302446+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: cluster 2026-03-09T18:20:40.200354+0000 mon.a (mon.0) 283 : cluster [INF] osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578] boot 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: cluster 2026-03-09T18:20:40.200504+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: audit 2026-03-09T18:20:40.201401+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: cluster 2026-03-09T18:20:40.381004+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 4.9 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: audit 2026-03-09T18:20:40.438539+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:41 vm00 bash[22468]: audit 2026-03-09T18:20:40.446402+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: cluster 2026-03-09T18:20:38.302334+0000 osd.1 (osd.1) 1 : cluster [DBG] 
purged_snaps scrub starts 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: cluster 2026-03-09T18:20:38.302446+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: cluster 2026-03-09T18:20:40.200354+0000 mon.a (mon.0) 283 : cluster [INF] osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578] boot 2026-03-09T18:20:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: cluster 2026-03-09T18:20:40.200504+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:20:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: audit 2026-03-09T18:20:40.201401+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: cluster 2026-03-09T18:20:40.381004+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 4.9 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: audit 2026-03-09T18:20:40.438539+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:20:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:41 vm00 bash[17468]: audit 2026-03-09T18:20:40.446402+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: cluster 2026-03-09T18:20:38.302334+0000 osd.1 
(osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: cluster 2026-03-09T18:20:38.302446+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: cluster 2026-03-09T18:20:40.200354+0000 mon.a (mon.0) 283 : cluster [INF] osd.1 [v2:192.168.123.100:6810/3881919578,v1:192.168.123.100:6811/3881919578] boot 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: cluster 2026-03-09T18:20:40.200504+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: audit 2026-03-09T18:20:40.201401+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: cluster 2026-03-09T18:20:40.381004+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 4.9 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: audit 2026-03-09T18:20:40.438539+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:20:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:41 vm08 bash[17774]: audit 2026-03-09T18:20:40.446402+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:42 vm00 bash[22468]: cluster 
2026-03-09T18:20:41.222256+0000 mon.a (mon.0) 288 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:42 vm00 bash[22468]: audit 2026-03-09T18:20:42.015902+0000 mon.a (mon.0) 289 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]: dispatch 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:42 vm00 bash[22468]: audit 2026-03-09T18:20:42.017253+0000 mon.b (mon.2) 8 : audit [INF] from='client.? 192.168.123.100:0/2244934346' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]: dispatch 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:42 vm00 bash[22468]: audit 2026-03-09T18:20:42.022988+0000 mon.a (mon.0) 290 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]': finished 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:42 vm00 bash[22468]: cluster 2026-03-09T18:20:42.023108+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:42 vm00 bash[22468]: audit 2026-03-09T18:20:42.023267+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:42 vm00 bash[17468]: cluster 2026-03-09T18:20:41.222256+0000 mon.a (mon.0) 288 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:42 vm00 bash[17468]: audit 2026-03-09T18:20:42.015902+0000 mon.a (mon.0) 289 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]: dispatch 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:42 vm00 bash[17468]: audit 2026-03-09T18:20:42.017253+0000 mon.b (mon.2) 8 : audit [INF] from='client.? 192.168.123.100:0/2244934346' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]: dispatch 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:42 vm00 bash[17468]: audit 2026-03-09T18:20:42.022988+0000 mon.a (mon.0) 290 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]': finished 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:42 vm00 bash[17468]: cluster 2026-03-09T18:20:42.023108+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T18:20:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:42 vm00 bash[17468]: audit 2026-03-09T18:20:42.023267+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:42 vm08 bash[17774]: cluster 2026-03-09T18:20:41.222256+0000 mon.a (mon.0) 288 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:20:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:42 vm08 bash[17774]: audit 2026-03-09T18:20:42.015902+0000 mon.a (mon.0) 289 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]: dispatch 2026-03-09T18:20:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:42 vm08 bash[17774]: audit 2026-03-09T18:20:42.017253+0000 mon.b (mon.2) 8 : audit [INF] from='client.? 
192.168.123.100:0/2244934346' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]: dispatch 2026-03-09T18:20:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:42 vm08 bash[17774]: audit 2026-03-09T18:20:42.022988+0000 mon.a (mon.0) 290 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b6754d4f-0b5b-4d48-8415-b590ff7d2cdb"}]': finished 2026-03-09T18:20:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:42 vm08 bash[17774]: cluster 2026-03-09T18:20:42.023108+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T18:20:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:42 vm08 bash[17774]: audit 2026-03-09T18:20:42.023267+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:43.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:43 vm00 bash[22468]: cluster 2026-03-09T18:20:42.381225+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:43.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:43 vm00 bash[22468]: audit 2026-03-09T18:20:42.674629+0000 mon.a (mon.0) 293 : audit [DBG] from='client.? 192.168.123.100:0/3352313790' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:43.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:43 vm00 bash[17468]: cluster 2026-03-09T18:20:42.381225+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:43.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:43 vm00 bash[17468]: audit 2026-03-09T18:20:42.674629+0000 mon.a (mon.0) 293 : audit [DBG] from='client.? 
192.168.123.100:0/3352313790' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:43.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:43 vm08 bash[17774]: cluster 2026-03-09T18:20:42.381225+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:43.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:43 vm08 bash[17774]: audit 2026-03-09T18:20:42.674629+0000 mon.a (mon.0) 293 : audit [DBG] from='client.? 192.168.123.100:0/3352313790' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:45.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:45 vm08 bash[17774]: cluster 2026-03-09T18:20:44.381467+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:45.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:45 vm00 bash[22468]: cluster 2026-03-09T18:20:44.381467+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:45.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:45 vm00 bash[17468]: cluster 2026-03-09T18:20:44.381467+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:47 vm00 bash[17468]: cluster 2026-03-09T18:20:46.381729+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:47 vm00 bash[17468]: cephadm 2026-03-09T18:20:46.755633+0000 mgr.y (mgr.14152) 65 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:47 vm00 bash[17468]: audit 2026-03-09T18:20:46.762301+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:47 vm00 bash[17468]: audit 2026-03-09T18:20:46.764439+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:47 vm00 bash[17468]: audit 2026-03-09T18:20:46.769522+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:47 vm00 bash[22468]: cluster 2026-03-09T18:20:46.381729+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:47 vm00 bash[22468]: cephadm 2026-03-09T18:20:46.755633+0000 mgr.y (mgr.14152) 65 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:47 vm00 bash[22468]: audit 2026-03-09T18:20:46.762301+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:47 vm00 bash[22468]: audit 2026-03-09T18:20:46.764439+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:47 vm00 bash[22468]: audit 2026-03-09T18:20:46.769522+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:47 vm08 bash[17774]: cluster 2026-03-09T18:20:46.381729+0000 mgr.y (mgr.14152) 64 : cluster [DBG] 
pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:47 vm08 bash[17774]: cephadm 2026-03-09T18:20:46.755633+0000 mgr.y (mgr.14152) 65 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:20:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:47 vm08 bash[17774]: audit 2026-03-09T18:20:46.762301+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:47 vm08 bash[17774]: audit 2026-03-09T18:20:46.764439+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:47 vm08 bash[17774]: audit 2026-03-09T18:20:46.769522+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:48.774 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:48 vm00 bash[17468]: audit 2026-03-09T18:20:48.296711+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:20:48.774 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:48 vm00 bash[17468]: audit 2026-03-09T18:20:48.297287+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:49.054 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:48 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.054 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:20:48 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.054 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:48 vm00 bash[22468]: audit 2026-03-09T18:20:48.296711+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:20:49.054 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:48 vm00 bash[22468]: audit 2026-03-09T18:20:48.297287+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:49.054 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:48 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.055 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:20:48 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.055 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:20:48 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:48 vm08 bash[17774]: audit 2026-03-09T18:20:48.296711+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:20:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:48 vm08 bash[17774]: audit 2026-03-09T18:20:48.297287+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:49.333 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.333 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:20:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.333 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.333 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:20:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:20:49.334 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:20:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 bash[17468]: cephadm 2026-03-09T18:20:48.297752+0000 mgr.y (mgr.14152) 66 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 bash[17468]: cluster 2026-03-09T18:20:48.382004+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 bash[17468]: audit 2026-03-09T18:20:49.322063+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 bash[17468]: audit 2026-03-09T18:20:49.323724+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 bash[17468]: audit 2026-03-09T18:20:49.328440+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:49 vm00 bash[17468]: audit 2026-03-09T18:20:49.336615+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 bash[22468]: cephadm 2026-03-09T18:20:48.297752+0000 mgr.y (mgr.14152) 66 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 bash[22468]: cluster 2026-03-09T18:20:48.382004+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 bash[22468]: audit 2026-03-09T18:20:49.322063+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 bash[22468]: audit 2026-03-09T18:20:49.323724+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 bash[22468]: audit 2026-03-09T18:20:49.328440+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:50.068 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:49 vm00 bash[22468]: audit 2026-03-09T18:20:49.336615+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:49 vm08 bash[17774]: cephadm 2026-03-09T18:20:48.297752+0000 mgr.y (mgr.14152) 66 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T18:20:50.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:49 vm08 bash[17774]: cluster 2026-03-09T18:20:48.382004+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:50.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:49 vm08 bash[17774]: audit 2026-03-09T18:20:49.322063+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:50.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:49 vm08 bash[17774]: audit 2026-03-09T18:20:49.323724+0000 mon.a (mon.0) 300 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:50.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:49 vm08 bash[17774]: audit 2026-03-09T18:20:49.328440+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:50.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:49 vm08 bash[17774]: audit 2026-03-09T18:20:49.336615+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:52.047 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:51 vm00 bash[17468]: cluster 2026-03-09T18:20:50.382222+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:52.047 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:51 vm00 bash[22468]: cluster 2026-03-09T18:20:50.382222+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:51 vm08 bash[17774]: cluster 2026-03-09T18:20:50.382222+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:52.790 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 2 on host 'vm00' 2026-03-09T18:20:52.862 DEBUG:teuthology.orchestra.run.vm00:osd.2> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.2.service 2026-03-09T18:20:52.862 INFO:tasks.cephadm:Deploying osd.3 on vm00 with /dev/vdb... 
2026-03-09T18:20:52.863 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vdb 2026-03-09T18:20:53.514 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:20:53.528 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm00:/dev/vdb 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: audit 2026-03-09T18:20:52.342931+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: cluster 2026-03-09T18:20:52.382454+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: audit 2026-03-09T18:20:52.511119+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: audit 2026-03-09T18:20:52.681395+0000 mon.a (mon.0) 305 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: audit 2026-03-09T18:20:52.786993+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: 
audit 2026-03-09T18:20:52.828189+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: audit 2026-03-09T18:20:52.829004+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:53 vm00 bash[17468]: audit 2026-03-09T18:20:52.829511+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.342931+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: cluster 2026-03-09T18:20:52.382454+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.511119+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.681395+0000 mon.a (mon.0) 305 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.786993+0000 mon.a (mon.0) 306 : audit [INF] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.828189+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:53.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.829004+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:53.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:53 vm00 bash[22468]: audit 2026-03-09T18:20:52.829511+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.342931+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: cluster 2026-03-09T18:20:52.382454+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.511119+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.681395+0000 mon.a (mon.0) 305 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 
2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.786993+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.828189+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.829004+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:53 vm08 bash[17774]: audit 2026-03-09T18:20:52.829511+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: audit 2026-03-09T18:20:53.522522+0000 mon.a (mon.0) 310 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: cluster 2026-03-09T18:20:53.522620+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: audit 2026-03-09T18:20:53.522966+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:54.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: audit 2026-03-09T18:20:53.525647+0000 mon.a (mon.0) 313 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: audit 2026-03-09T18:20:53.986026+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: audit 2026-03-09T18:20:53.987313+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:54 vm00 bash[22468]: audit 2026-03-09T18:20:53.987678+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:20:54 vm00 bash[31464]: debug 2026-03-09T18:20:54.525+0000 7f0940cfb700 -1 osd.2 0 waiting for initial osdmap 2026-03-09T18:20:54.884 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:20:54 vm00 bash[31464]: debug 2026-03-09T18:20:54.533+0000 7f093ae91700 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: audit 2026-03-09T18:20:53.522522+0000 mon.a (mon.0) 310 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd='[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: cluster 2026-03-09T18:20:53.522620+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: audit 2026-03-09T18:20:53.522966+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: audit 2026-03-09T18:20:53.525647+0000 mon.a (mon.0) 313 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: audit 2026-03-09T18:20:53.986026+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: audit 2026-03-09T18:20:53.987313+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:54 vm00 bash[17468]: audit 2026-03-09T18:20:53.987678+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: audit 2026-03-09T18:20:53.522522+0000 mon.a (mon.0) 310 : audit [INF] 
from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: cluster 2026-03-09T18:20:53.522620+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: audit 2026-03-09T18:20:53.522966+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: audit 2026-03-09T18:20:53.525647+0000 mon.a (mon.0) 313 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: audit 2026-03-09T18:20:53.986026+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: audit 2026-03-09T18:20:53.987313+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:20:54.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:54 vm08 bash[17774]: audit 2026-03-09T18:20:53.987678+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:20:55.884 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: audit 2026-03-09T18:20:53.984461+0000 mgr.y (mgr.14152) 70 : audit [DBG] from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: cluster 2026-03-09T18:20:54.382737+0000 mgr.y (mgr.14152) 71 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: audit 2026-03-09T18:20:54.523584+0000 mon.a (mon.0) 317 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: cluster 2026-03-09T18:20:54.523948+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: audit 2026-03-09T18:20:54.527458+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: audit 2026-03-09T18:20:54.533744+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:55 vm00 bash[17468]: audit 2026-03-09T18:20:55.527696+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: audit 2026-03-09T18:20:53.984461+0000 mgr.y (mgr.14152) 70 : audit [DBG] from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: cluster 2026-03-09T18:20:54.382737+0000 mgr.y (mgr.14152) 71 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: audit 2026-03-09T18:20:54.523584+0000 mon.a (mon.0) 317 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: cluster 2026-03-09T18:20:54.523948+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: audit 2026-03-09T18:20:54.527458+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: audit 2026-03-09T18:20:54.533744+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:55 vm00 bash[22468]: audit 2026-03-09T18:20:55.527696+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: audit 2026-03-09T18:20:53.984461+0000 mgr.y (mgr.14152) 70 : audit [DBG] from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:20:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: cluster 2026-03-09T18:20:54.382737+0000 mgr.y (mgr.14152) 71 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: audit 2026-03-09T18:20:54.523584+0000 mon.a (mon.0) 317 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:20:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: cluster 2026-03-09T18:20:54.523948+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T18:20:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: audit 2026-03-09T18:20:54.527458+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: audit 2026-03-09T18:20:54.533744+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:55 vm08 bash[17774]: audit 2026-03-09T18:20:55.527696+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:56.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:56 vm00 bash[22468]: cluster 2026-03-09T18:20:53.686056+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:56 vm00 bash[22468]: cluster 2026-03-09T18:20:53.686139+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:56 vm00 bash[22468]: cluster 2026-03-09T18:20:55.538022+0000 mon.a (mon.0) 322 : cluster [INF] osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913] boot 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:56 vm00 bash[22468]: cluster 2026-03-09T18:20:55.538111+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:56 vm00 bash[22468]: audit 2026-03-09T18:20:55.538779+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:56 vm00 bash[17468]: cluster 2026-03-09T18:20:53.686056+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:56 vm00 bash[17468]: cluster 2026-03-09T18:20:53.686139+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:56 vm00 bash[17468]: cluster 2026-03-09T18:20:55.538022+0000 mon.a (mon.0) 322 : cluster [INF] osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913] boot 2026-03-09T18:20:56.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:56 vm00 bash[17468]: cluster 2026-03-09T18:20:55.538111+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T18:20:56.884 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:56 vm00 bash[17468]: audit 2026-03-09T18:20:55.538779+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:56.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:56 vm08 bash[17774]: cluster 2026-03-09T18:20:53.686056+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:20:56.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:56 vm08 bash[17774]: cluster 2026-03-09T18:20:53.686139+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:20:56.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:56 vm08 bash[17774]: cluster 2026-03-09T18:20:55.538022+0000 mon.a (mon.0) 322 : cluster [INF] osd.2 [v2:192.168.123.100:6818/1380134913,v1:192.168.123.100:6819/1380134913] boot 2026-03-09T18:20:56.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:56 vm08 bash[17774]: cluster 2026-03-09T18:20:55.538111+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T18:20:56.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:56 vm08 bash[17774]: audit 2026-03-09T18:20:55.538779+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: cluster 2026-03-09T18:20:56.382954+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 9.9 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:56.423603+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-09T18:20:57.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:56.575235+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: cluster 2026-03-09T18:20:56.575410+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:56.578820+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:57.243211+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:57.244934+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:57.249451+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: audit 2026-03-09T18:20:57.578368+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": 
"mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:20:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:57 vm00 bash[22468]: cluster 2026-03-09T18:20:57.578533+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: cluster 2026-03-09T18:20:56.382954+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 9.9 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:56.423603+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:56.575235+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: cluster 2026-03-09T18:20:56.575410+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:56.578820+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:57.243211+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:57.244934+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:57.249451+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: audit 2026-03-09T18:20:57.578368+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:20:57.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:57 vm00 bash[17468]: cluster 2026-03-09T18:20:57.578533+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: cluster 2026-03-09T18:20:56.382954+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 9.9 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:56.423603+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:56.575235+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "osd pool create", 
"format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: cluster 2026-03-09T18:20:56.575410+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:56.578820+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:57.243211+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:57.244934+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:57.249451+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: audit 2026-03-09T18:20:57.578368+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T18:20:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:57 vm08 bash[17774]: cluster 2026-03-09T18:20:57.578533+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e20: 3 total, 3 
up, 3 in 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:58 vm00 bash[22468]: audit 2026-03-09T18:20:58.103048+0000 mon.c (mon.1) 6 : audit [INF] from='client.? 192.168.123.100:0/11426059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]: dispatch 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:58 vm00 bash[22468]: audit 2026-03-09T18:20:58.103735+0000 mon.a (mon.0) 334 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]: dispatch 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:58 vm00 bash[22468]: audit 2026-03-09T18:20:58.112107+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]': finished 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:58 vm00 bash[22468]: cluster 2026-03-09T18:20:58.112250+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:58 vm00 bash[22468]: audit 2026-03-09T18:20:58.112402+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:58 vm00 bash[17468]: audit 2026-03-09T18:20:58.103048+0000 mon.c (mon.1) 6 : audit [INF] from='client.? 192.168.123.100:0/11426059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]: dispatch 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:58 vm00 bash[17468]: audit 2026-03-09T18:20:58.103735+0000 mon.a (mon.0) 334 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]: dispatch 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:58 vm00 bash[17468]: audit 2026-03-09T18:20:58.112107+0000 mon.a (mon.0) 335 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]': finished 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:58 vm00 bash[17468]: cluster 2026-03-09T18:20:58.112250+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-09T18:20:58.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:58 vm00 bash[17468]: audit 2026-03-09T18:20:58.112402+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:20:58.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:58 vm08 bash[17774]: audit 2026-03-09T18:20:58.103048+0000 mon.c (mon.1) 6 : audit [INF] from='client.? 192.168.123.100:0/11426059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]: dispatch 2026-03-09T18:20:58.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:58 vm08 bash[17774]: audit 2026-03-09T18:20:58.103735+0000 mon.a (mon.0) 334 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]: dispatch 2026-03-09T18:20:58.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:58 vm08 bash[17774]: audit 2026-03-09T18:20:58.112107+0000 mon.a (mon.0) 335 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "04bdb6c0-c351-4b7e-b364-865748cfae11"}]': finished 2026-03-09T18:20:58.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:58 vm08 bash[17774]: cluster 2026-03-09T18:20:58.112250+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-09T18:20:58.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:58 vm08 bash[17774]: audit 2026-03-09T18:20:58.112402+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: cluster 2026-03-09T18:20:58.383203+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v50: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:58.859732+0000 mon.a (mon.0) 338 : audit [DBG] from='client.? 
192.168.123.100:0/2994793511' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.105739+0000 mon.a (mon.0) 339 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.258461+0000 mon.a (mon.0) 340 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.258667+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.260426+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.260491+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.267649+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.276392+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.276478+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.276540+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.424129+0000 mon.c (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: cluster 2026-03-09T18:20:59.430459+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.430659+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.430761+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.430824+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.430927+0000 mon.a (mon.0) 351 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.431221+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:20:59 vm00 bash[22468]: audit 2026-03-09T18:20:59.583684+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: cluster 2026-03-09T18:20:58.383203+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v50: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:58.859732+0000 mon.a (mon.0) 338 : audit [DBG] from='client.? 
192.168.123.100:0/2994793511' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.105739+0000 mon.a (mon.0) 339 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.258461+0000 mon.a (mon.0) 340 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.258667+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.260426+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.260491+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.267649+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.276392+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.885 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.276478+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.276540+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.424129+0000 mon.c (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: cluster 2026-03-09T18:20:59.430459+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.430659+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.430761+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.430824+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.430927+0000 mon.a (mon.0) 351 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.431221+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:20:59 vm00 bash[17468]: audit 2026-03-09T18:20:59.583684+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: cluster 2026-03-09T18:20:58.383203+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v50: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:58.859732+0000 mon.a (mon.0) 338 : audit [DBG] from='client.? 
192.168.123.100:0/2994793511' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.105739+0000 mon.a (mon.0) 339 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.258461+0000 mon.a (mon.0) 340 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.258667+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.260426+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.260491+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.267649+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.276392+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.276478+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.276540+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.424129+0000 mon.c (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: cluster 2026-03-09T18:20:59.430459+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.430659+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.430761+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.430824+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.430927+0000 mon.a (mon.0) 351 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.431221+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T18:20:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:20:59 vm08 bash[17774]: audit 2026-03-09T18:20:59.583684+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T18:21:01.605 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:01 vm00 bash[17468]: cluster 2026-03-09T18:21:00.383466+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 16 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:01.605 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:01 vm00 bash[17468]: cluster 2026-03-09T18:21:00.606906+0000 mon.a (mon.0) 352 : cluster [DBG] mgrmap e15: y(active, since 80s), standbys: x 2026-03-09T18:21:01.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:01 vm00 bash[22468]: cluster 2026-03-09T18:21:00.383466+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 16 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:01.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:01 vm00 bash[22468]: cluster 2026-03-09T18:21:00.606906+0000 mon.a (mon.0) 352 : cluster [DBG] mgrmap e15: y(active, since 80s), standbys: x 2026-03-09T18:21:01.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:01 vm08 bash[17774]: cluster 2026-03-09T18:21:00.383466+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 16 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:01.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:01 vm08 bash[17774]: cluster 2026-03-09T18:21:00.606906+0000 mon.a (mon.0) 352 : cluster [DBG] mgrmap e15: y(active, since 80s), standbys: x 
2026-03-09T18:21:03.871 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:03 vm00 bash[17468]: cluster 2026-03-09T18:21:02.383744+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:03.872 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:03 vm00 bash[22468]: cluster 2026-03-09T18:21:02.383744+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:03 vm08 bash[17774]: cluster 2026-03-09T18:21:02.383744+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:04.677 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:04 vm00 bash[22468]: audit 2026-03-09T18:21:04.490447+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:21:04.678 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:04 vm00 bash[22468]: audit 2026-03-09T18:21:04.490961+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:04.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:04 vm00 bash[17468]: audit 2026-03-09T18:21:04.490447+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:21:04.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:04 vm00 bash[17468]: audit 2026-03-09T18:21:04.490961+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:04 
vm08 bash[17774]: audit 2026-03-09T18:21:04.490447+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:21:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:04 vm08 bash[17774]: audit 2026-03-09T18:21:04.490961+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:05.305 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.305 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.305 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:05.305 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.306 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.306 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.622 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:05.622 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.622 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.622 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.622 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:05.622 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:21:05 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 bash[17468]: cluster 2026-03-09T18:21:04.384034+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 bash[17468]: cephadm 2026-03-09T18:21:04.491416+0000 mgr.y (mgr.14152) 77 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:21:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 bash[17468]: audit 2026-03-09T18:21:05.426965+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 bash[17468]: audit 2026-03-09T18:21:05.431039+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 bash[17468]: audit 2026-03-09T18:21:05.437450+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:05 vm00 bash[17468]: audit 2026-03-09T18:21:05.443444+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T18:21:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 bash[22468]: cluster 2026-03-09T18:21:04.384034+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 bash[22468]: cephadm 2026-03-09T18:21:04.491416+0000 mgr.y (mgr.14152) 77 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:21:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 bash[22468]: audit 2026-03-09T18:21:05.426965+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 bash[22468]: audit 2026-03-09T18:21:05.431039+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 bash[22468]: audit 2026-03-09T18:21:05.437450+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:05.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:05 vm00 bash[22468]: audit 2026-03-09T18:21:05.443444+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:05 vm08 bash[17774]: cluster 2026-03-09T18:21:04.384034+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:05 vm08 bash[17774]: cephadm 2026-03-09T18:21:04.491416+0000 mgr.y (mgr.14152) 77 
: cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:21:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:05 vm08 bash[17774]: audit 2026-03-09T18:21:05.426965+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:05 vm08 bash[17774]: audit 2026-03-09T18:21:05.431039+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:05 vm08 bash[17774]: audit 2026-03-09T18:21:05.437450+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:05 vm08 bash[17774]: audit 2026-03-09T18:21:05.443444+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:07.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:07 vm00 bash[22468]: cluster 2026-03-09T18:21:06.384292+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:07.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:07 vm00 bash[17468]: cluster 2026-03-09T18:21:06.384292+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:07.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:07 vm08 bash[17774]: cluster 2026-03-09T18:21:06.384292+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:09.053 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 3 on 
host 'vm00' 2026-03-09T18:21:09.151 DEBUG:teuthology.orchestra.run.vm00:osd.3> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.3.service 2026-03-09T18:21:09.152 INFO:tasks.cephadm:Deploying osd.4 on vm08 with /dev/vde... 2026-03-09T18:21:09.152 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vde 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: cluster 2026-03-09T18:21:08.384607+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:08.431713+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:08.438440+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:08.853900+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:08.854280+0000 mon.a (mon.0) 361 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:09.048132+0000 mon.a 
(mon.0) 362 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:09.057367+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:09.058177+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:09 vm08 bash[17774]: audit 2026-03-09T18:21:09.058711+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: cluster 2026-03-09T18:21:08.384607+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:08.431713+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:08.438440+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:08.853900+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["3"]}]: dispatch 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:08.854280+0000 mon.a (mon.0) 361 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:09.048132+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:09.057367+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:09.058177+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:09 vm00 bash[22468]: audit 2026-03-09T18:21:09.058711+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: cluster 2026-03-09T18:21:08.384607+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:08.431713+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 
2026-03-09T18:21:08.438440+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:08.853900+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:08.854280+0000 mon.a (mon.0) 361 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:09.048132+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:09.057367+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:09.058177+0000 mon.a (mon.0) 364 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:09.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:09 vm00 bash[17468]: audit 2026-03-09T18:21:09.058711+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:09.904 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:21:09.912 DEBUG:teuthology.orchestra.run.vm08:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm08:/dev/vde 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:09.507426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: cluster 2026-03-09T18:21:09.507583+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:09.507798+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:09.508039+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:09.508391+0000 mon.a (mon.0) 369 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:10.339917+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:10.341733+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:10 vm00 bash[22468]: audit 2026-03-09T18:21:10.342202+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:21:10 vm00 bash[34680]: debug 2026-03-09T18:21:10.521+0000 7faa5f0a3700 -1 osd.3 0 waiting for initial osdmap 2026-03-09T18:21:10.884 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:21:10 vm00 bash[34680]: debug 2026-03-09T18:21:10.529+0000 7faa59239700 -1 osd.3 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:09.507426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: cluster 2026-03-09T18:21:09.507583+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:09.507798+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:09.508039+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 
[v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:09.508391+0000 mon.a (mon.0) 369 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:10.339917+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:10.341733+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:10.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:10 vm00 bash[17468]: audit 2026-03-09T18:21:10.342202+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: audit 2026-03-09T18:21:09.507426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: cluster 2026-03-09T18:21:09.507583+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 
bash[17774]: audit 2026-03-09T18:21:09.507798+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: audit 2026-03-09T18:21:09.508039+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: audit 2026-03-09T18:21:09.508391+0000 mon.a (mon.0) 369 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: audit 2026-03-09T18:21:10.339917+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: audit 2026-03-09T18:21:10.341733+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:10.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:10 vm08 bash[17774]: audit 2026-03-09T18:21:10.342202+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:11 vm00 bash[17468]: audit 2026-03-09T18:21:10.338502+0000 mgr.y (mgr.14152) 80 : audit [DBG] from='client.24200 -' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:11 vm00 bash[17468]: cluster 2026-03-09T18:21:10.384987+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:11 vm00 bash[17468]: audit 2026-03-09T18:21:10.510179+0000 mon.a (mon.0) 373 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:11 vm00 bash[17468]: cluster 2026-03-09T18:21:10.510470+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:11 vm00 bash[17468]: audit 2026-03-09T18:21:10.511362+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:11 vm00 bash[17468]: audit 2026-03-09T18:21:10.525637+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:11 vm00 bash[22468]: audit 2026-03-09T18:21:10.338502+0000 mgr.y (mgr.14152) 80 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:11 vm00 bash[22468]: cluster 2026-03-09T18:21:10.384987+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB 
data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:11 vm00 bash[22468]: audit 2026-03-09T18:21:10.510179+0000 mon.a (mon.0) 373 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:11 vm00 bash[22468]: cluster 2026-03-09T18:21:10.510470+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:11 vm00 bash[22468]: audit 2026-03-09T18:21:10.511362+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:11.794 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:11 vm00 bash[22468]: audit 2026-03-09T18:21:10.525637+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:11 vm08 bash[17774]: audit 2026-03-09T18:21:10.338502+0000 mgr.y (mgr.14152) 80 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:11 vm08 bash[17774]: cluster 2026-03-09T18:21:10.384987+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:11 vm08 bash[17774]: audit 2026-03-09T18:21:10.510179+0000 mon.a (mon.0) 373 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", 
"root=default"]}]': finished 2026-03-09T18:21:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:11 vm08 bash[17774]: cluster 2026-03-09T18:21:10.510470+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T18:21:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:11 vm08 bash[17774]: audit 2026-03-09T18:21:10.511362+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:11 vm08 bash[17774]: audit 2026-03-09T18:21:10.525637+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:12 vm00 bash[22468]: cluster 2026-03-09T18:21:09.836560+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:12 vm00 bash[22468]: cluster 2026-03-09T18:21:09.836657+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:12 vm00 bash[22468]: audit 2026-03-09T18:21:11.515168+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:12 vm00 bash[22468]: cluster 2026-03-09T18:21:11.523509+0000 mon.a (mon.0) 378 : cluster [INF] osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005] boot 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:12 vm00 bash[22468]: cluster 2026-03-09T18:21:11.523844+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:12 vm00 
bash[22468]: audit 2026-03-09T18:21:11.525085+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:12 vm00 bash[17468]: cluster 2026-03-09T18:21:09.836560+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:12 vm00 bash[17468]: cluster 2026-03-09T18:21:09.836657+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:12 vm00 bash[17468]: audit 2026-03-09T18:21:11.515168+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:12 vm00 bash[17468]: cluster 2026-03-09T18:21:11.523509+0000 mon.a (mon.0) 378 : cluster [INF] osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005] boot 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:12 vm00 bash[17468]: cluster 2026-03-09T18:21:11.523844+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in 2026-03-09T18:21:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:12 vm00 bash[17468]: audit 2026-03-09T18:21:11.525085+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:12 vm08 bash[17774]: cluster 2026-03-09T18:21:09.836560+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:12 vm08 bash[17774]: cluster 2026-03-09T18:21:09.836657+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 
2026-03-09T18:21:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:12 vm08 bash[17774]: audit 2026-03-09T18:21:11.515168+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:12 vm08 bash[17774]: cluster 2026-03-09T18:21:11.523509+0000 mon.a (mon.0) 378 : cluster [INF] osd.3 [v2:192.168.123.100:6826/51325005,v1:192.168.123.100:6827/51325005] boot 2026-03-09T18:21:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:12 vm08 bash[17774]: cluster 2026-03-09T18:21:11.523844+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in 2026-03-09T18:21:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:12 vm08 bash[17774]: audit 2026-03-09T18:21:11.525085+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:21:13.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:13 vm00 bash[22468]: cluster 2026-03-09T18:21:12.385436+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:13.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:13 vm00 bash[22468]: cluster 2026-03-09T18:21:12.538451+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T18:21:13.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:13 vm00 bash[17468]: cluster 2026-03-09T18:21:12.385436+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:13.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:13 vm00 bash[17468]: cluster 2026-03-09T18:21:12.538451+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T18:21:13.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 
09 18:21:13 vm08 bash[17774]: cluster 2026-03-09T18:21:12.385436+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T18:21:13.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:13 vm08 bash[17774]: cluster 2026-03-09T18:21:12.538451+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: cephadm 2026-03-09T18:21:13.536911+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:13.548171+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:13.549202+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:13.554794+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:13.829577+0000 mon.a (mon.0) 385 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]: dispatch 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:13.830834+0000 mon.b (mon.2) 11 : audit [INF] from='client.? 
192.168.123.108:0/3320699417' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]: dispatch 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:14.004485+0000 mon.a (mon.0) 386 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]': finished 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: cluster 2026-03-09T18:21:14.004703+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:14 vm00 bash[22468]: audit 2026-03-09T18:21:14.004797+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:14.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: cephadm 2026-03-09T18:21:13.536911+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:13.548171+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:13.549202+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:13.554794+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:13.829577+0000 mon.a (mon.0) 385 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]: dispatch 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:13.830834+0000 mon.b (mon.2) 11 : audit [INF] from='client.? 192.168.123.108:0/3320699417' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]: dispatch 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:14.004485+0000 mon.a (mon.0) 386 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]': finished 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: cluster 2026-03-09T18:21:14.004703+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T18:21:14.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:14 vm00 bash[17468]: audit 2026-03-09T18:21:14.004797+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: cephadm 2026-03-09T18:21:13.536911+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:13.548171+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:13.549202+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:13.554794+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:13.829577+0000 mon.a (mon.0) 385 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]: dispatch 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:13.830834+0000 mon.b (mon.2) 11 : audit [INF] from='client.? 192.168.123.108:0/3320699417' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]: dispatch 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:14.004485+0000 mon.a (mon.0) 386 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "28dbafde-327a-4cb7-aaf4-8f0bed8a7a21"}]': finished 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: cluster 2026-03-09T18:21:14.004703+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T18:21:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:14 vm08 bash[17774]: audit 2026-03-09T18:21:14.004797+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:15 vm00 bash[22468]: cluster 2026-03-09T18:21:14.385723+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:15 vm00 bash[22468]: audit 2026-03-09T18:21:14.796318+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.108:0/2248337238' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:15 vm00 bash[17468]: cluster 2026-03-09T18:21:14.385723+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:15 vm00 bash[17468]: audit 2026-03-09T18:21:14.796318+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 
192.168.123.108:0/2248337238' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:15 vm08 bash[17774]: cluster 2026-03-09T18:21:14.385723+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:15 vm08 bash[17774]: audit 2026-03-09T18:21:14.796318+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.108:0/2248337238' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:17.465 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:17 vm08 bash[17774]: cluster 2026-03-09T18:21:16.385966+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:17.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:17 vm00 bash[22468]: cluster 2026-03-09T18:21:16.385966+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:17.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:17 vm00 bash[17468]: cluster 2026-03-09T18:21:16.385966+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:19 vm08 bash[17774]: cluster 2026-03-09T18:21:18.386268+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:19.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:19 vm00 bash[22468]: cluster 2026-03-09T18:21:18.386268+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:19.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:21:19 vm00 bash[17468]: cluster 2026-03-09T18:21:18.386268+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:21.448 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:21:21 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:21.448 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:21 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:21 vm08 bash[17774]: cluster 2026-03-09T18:21:20.386596+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:21 vm08 bash[17774]: audit 2026-03-09T18:21:20.612256+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:21:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:21 vm08 bash[17774]: audit 2026-03-09T18:21:20.612844+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:21 vm08 bash[17774]: cephadm 2026-03-09T18:21:20.613265+0000 mgr.y (mgr.14152) 88 : cephadm [INF] Deploying daemon osd.4 on vm08 2026-03-09T18:21:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:21 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:21.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:21:21 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:21 vm00 bash[22468]: cluster 2026-03-09T18:21:20.386596+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:21 vm00 bash[22468]: audit 2026-03-09T18:21:20.612256+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:21 vm00 bash[22468]: audit 2026-03-09T18:21:20.612844+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:21 vm00 bash[22468]: cephadm 2026-03-09T18:21:20.613265+0000 mgr.y (mgr.14152) 88 : cephadm [INF] Deploying daemon osd.4 on vm08 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:21 vm00 bash[17468]: cluster 2026-03-09T18:21:20.386596+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:21 vm00 bash[17468]: audit 2026-03-09T18:21:20.612256+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:21 vm00 bash[17468]: audit 2026-03-09T18:21:20.612844+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:21.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:21 vm00 bash[17468]: cephadm 
2026-03-09T18:21:20.613265+0000 mgr.y (mgr.14152) 88 : cephadm [INF] Deploying daemon osd.4 on vm08 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:22 vm00 bash[22468]: audit 2026-03-09T18:21:21.551060+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:22 vm00 bash[22468]: audit 2026-03-09T18:21:21.582893+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:22 vm00 bash[22468]: audit 2026-03-09T18:21:21.583844+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:22 vm00 bash[22468]: audit 2026-03-09T18:21:21.584403+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:22 vm00 bash[17468]: audit 2026-03-09T18:21:21.551060+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:22 vm00 bash[17468]: audit 2026-03-09T18:21:21.582893+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:22 vm00 bash[17468]: audit 2026-03-09T18:21:21.583844+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:21:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:22 vm00 bash[17468]: audit 2026-03-09T18:21:21.584403+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:22 vm08 bash[17774]: audit 2026-03-09T18:21:21.551060+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:22 vm08 bash[17774]: audit 2026-03-09T18:21:21.582893+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:22 vm08 bash[17774]: audit 2026-03-09T18:21:21.583844+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:22 vm08 bash[17774]: audit 2026-03-09T18:21:21.584403+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:23 vm00 bash[22468]: cluster 2026-03-09T18:21:22.386886+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:23.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:23 vm00 bash[17468]: cluster 2026-03-09T18:21:22.386886+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:23 vm08 bash[17774]: 
cluster 2026-03-09T18:21:22.386886+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:24.854 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:24 vm08 bash[17774]: audit 2026-03-09T18:21:24.283777+0000 mon.a (mon.0) 395 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:21:24.854 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:24 vm08 bash[17774]: audit 2026-03-09T18:21:24.285120+0000 mon.b (mon.2) 13 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:21:24.854 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:24 vm08 bash[17774]: audit 2026-03-09T18:21:24.491946+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:24.854 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:24 vm08 bash[17774]: audit 2026-03-09T18:21:24.497855+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:24 vm00 bash[22468]: audit 2026-03-09T18:21:24.283777+0000 mon.a (mon.0) 395 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:24 vm00 bash[22468]: audit 2026-03-09T18:21:24.285120+0000 mon.b (mon.2) 13 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:24 vm00 bash[22468]: audit 
2026-03-09T18:21:24.491946+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:24 vm00 bash[22468]: audit 2026-03-09T18:21:24.497855+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:24 vm00 bash[17468]: audit 2026-03-09T18:21:24.283777+0000 mon.a (mon.0) 395 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:24 vm00 bash[17468]: audit 2026-03-09T18:21:24.285120+0000 mon.b (mon.2) 13 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:24 vm00 bash[17468]: audit 2026-03-09T18:21:24.491946+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:24 vm00 bash[17468]: audit 2026-03-09T18:21:24.497855+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:24.912 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 4 on host 'vm08' 2026-03-09T18:21:24.977 DEBUG:teuthology.orchestra.run.vm08:osd.4> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.4.service 2026-03-09T18:21:24.979 INFO:tasks.cephadm:Deploying osd.5 on vm08 with /dev/vdd... 
2026-03-09T18:21:24.979 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vdd 2026-03-09T18:21:25.636 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:21:25.646 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm08:/dev/vdd 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: cluster 2026-03-09T18:21:24.387150+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.581393+0000 mon.a (mon.0) 398 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: cluster 2026-03-09T18:21:24.581518+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.581638+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.584157+0000 mon.b (mon.2) 14 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": 
["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.584262+0000 mon.a (mon.0) 401 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.906610+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.913806+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.914486+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:25 vm00 bash[22468]: audit 2026-03-09T18:21:24.914863+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: cluster 2026-03-09T18:21:24.387150+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.581393+0000 mon.a (mon.0) 398 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 
2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: cluster 2026-03-09T18:21:24.581518+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.581638+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.584157+0000 mon.b (mon.2) 14 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.584262+0000 mon.a (mon.0) 401 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.906610+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.913806+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:25.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.914486+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:21:25.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:25 vm00 bash[17468]: audit 2026-03-09T18:21:24.914863+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:25.895 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:21:25 vm08 bash[20830]: debug 2026-03-09T18:21:25.597+0000 7fd6dcf01700 -1 osd.4 0 waiting for initial osdmap 2026-03-09T18:21:25.895 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:21:25 vm08 bash[20830]: debug 2026-03-09T18:21:25.605+0000 7fd6d8099700 -1 osd.4 29 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: cluster 2026-03-09T18:21:24.387150+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.581393+0000 mon.a (mon.0) 398 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: cluster 2026-03-09T18:21:24.581518+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.581638+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.584157+0000 mon.b (mon.2) 14 : audit [INF] from='osd.4 
[v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.584262+0000 mon.a (mon.0) 401 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.906610+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.913806+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.914486+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:25.895 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:25 vm08 bash[17774]: audit 2026-03-09T18:21:24.914863+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:25.584365+0000 mon.a (mon.0) 406 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:26.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: cluster 2026-03-09T18:21:25.586368+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:25.588234+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:25.590181+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: cluster 2026-03-09T18:21:26.081434+0000 mon.a (mon.0) 410 : cluster [INF] osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586] boot 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: cluster 2026-03-09T18:21:26.081491+0000 mon.a (mon.0) 411 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:26.082080+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:26.083308+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:26.086710+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:26 vm00 bash[22468]: audit 2026-03-09T18:21:26.090305+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: audit 2026-03-09T18:21:25.584365+0000 mon.a (mon.0) 406 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: cluster 2026-03-09T18:21:25.586368+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: audit 2026-03-09T18:21:25.588234+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: audit 2026-03-09T18:21:25.590181+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: cluster 2026-03-09T18:21:26.081434+0000 mon.a (mon.0) 410 : cluster [INF] osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586] boot 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: cluster 2026-03-09T18:21:26.081491+0000 mon.a (mon.0) 411 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 
bash[17468]: audit 2026-03-09T18:21:26.082080+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: audit 2026-03-09T18:21:26.083308+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:26.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: audit 2026-03-09T18:21:26.086710+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:26.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:26 vm00 bash[17468]: audit 2026-03-09T18:21:26.090305+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 2026-03-09T18:21:25.584365+0000 mon.a (mon.0) 406 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: cluster 2026-03-09T18:21:25.586368+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 2026-03-09T18:21:25.588234+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 
2026-03-09T18:21:25.590181+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: cluster 2026-03-09T18:21:26.081434+0000 mon.a (mon.0) 410 : cluster [INF] osd.4 [v2:192.168.123.108:6800/3738925586,v1:192.168.123.108:6801/3738925586] boot 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: cluster 2026-03-09T18:21:26.081491+0000 mon.a (mon.0) 411 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 2026-03-09T18:21:26.082080+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 2026-03-09T18:21:26.083308+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 2026-03-09T18:21:26.086710+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:26.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:26 vm08 bash[17774]: audit 2026-03-09T18:21:26.090305+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:27 vm00 bash[22468]: cluster 2026-03-09T18:21:25.246338+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub 
starts 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:27 vm00 bash[22468]: cluster 2026-03-09T18:21:25.246443+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:27 vm00 bash[22468]: audit 2026-03-09T18:21:26.080983+0000 mgr.y (mgr.14152) 91 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:27 vm00 bash[22468]: cluster 2026-03-09T18:21:26.387519+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v73: 1 pgs: 1 remapped+peering; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:27 vm00 bash[22468]: cluster 2026-03-09T18:21:27.082389+0000 mon.a (mon.0) 416 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:27 vm00 bash[17468]: cluster 2026-03-09T18:21:25.246338+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:27 vm00 bash[17468]: cluster 2026-03-09T18:21:25.246443+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:27 vm00 bash[17468]: audit 2026-03-09T18:21:26.080983+0000 mgr.y (mgr.14152) 91 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:27 vm00 bash[17468]: cluster 2026-03-09T18:21:26.387519+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v73: 1 pgs: 1 remapped+peering; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:27.884 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:27 vm00 bash[17468]: cluster 2026-03-09T18:21:27.082389+0000 mon.a (mon.0) 416 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T18:21:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:27 vm08 bash[17774]: cluster 2026-03-09T18:21:25.246338+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:27 vm08 bash[17774]: cluster 2026-03-09T18:21:25.246443+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:27 vm08 bash[17774]: audit 2026-03-09T18:21:26.080983+0000 mgr.y (mgr.14152) 91 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:27 vm08 bash[17774]: cluster 2026-03-09T18:21:26.387519+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v73: 1 pgs: 1 remapped+peering; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T18:21:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:27 vm08 bash[17774]: cluster 2026-03-09T18:21:27.082389+0000 mon.a (mon.0) 416 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T18:21:29.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:29 vm00 bash[22468]: cluster 2026-03-09T18:21:28.088026+0000 mon.a (mon.0) 417 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T18:21:29.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:29 vm00 bash[22468]: cluster 2026-03-09T18:21:28.387886+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v76: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:29.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:29 vm00 bash[17468]: cluster 2026-03-09T18:21:28.088026+0000 mon.a (mon.0) 417 : cluster [DBG] osdmap e32: 5 
total, 5 up, 5 in 2026-03-09T18:21:29.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:29 vm00 bash[17468]: cluster 2026-03-09T18:21:28.387886+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v76: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:29 vm08 bash[17774]: cluster 2026-03-09T18:21:28.088026+0000 mon.a (mon.0) 417 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T18:21:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:29 vm08 bash[17774]: cluster 2026-03-09T18:21:28.387886+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v76: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: cephadm 2026-03-09T18:21:29.224324+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:29.229856+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:29.230458+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: cephadm 2026-03-09T18:21:29.230808+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Adjusting osd_memory_target on vm08 to 455.7M 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: cephadm 2026-03-09T18:21:29.231299+0000 mgr.y (mgr.14152) 96 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 477915955: error parsing value: Value '477915955' is below 
minimum 939524096 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:29.233973+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:30.207467+0000 mon.a (mon.0) 421 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]: dispatch 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:30.208636+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.108:0/2742613143' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]: dispatch 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:30.215412+0000 mon.a (mon.0) 422 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]': finished 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: cluster 2026-03-09T18:21:30.215509+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T18:21:30.587 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:30 vm08 bash[17774]: audit 2026-03-09T18:21:30.215699+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: cephadm 2026-03-09T18:21:29.224324+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 2026-03-09T18:21:29.229856+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 2026-03-09T18:21:29.230458+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: cephadm 2026-03-09T18:21:29.230808+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Adjusting osd_memory_target on vm08 to 455.7M 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: cephadm 2026-03-09T18:21:29.231299+0000 mgr.y (mgr.14152) 96 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 
2026-03-09T18:21:29.233973+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 2026-03-09T18:21:30.207467+0000 mon.a (mon.0) 421 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]: dispatch 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 2026-03-09T18:21:30.208636+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.108:0/2742613143' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]: dispatch 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 2026-03-09T18:21:30.215412+0000 mon.a (mon.0) 422 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]': finished 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: cluster 2026-03-09T18:21:30.215509+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:30 vm00 bash[22468]: audit 2026-03-09T18:21:30.215699+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: cephadm 2026-03-09T18:21:29.224324+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:29.229856+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:29.230458+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: cephadm 2026-03-09T18:21:29.230808+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Adjusting osd_memory_target on vm08 to 455.7M 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: cephadm 2026-03-09T18:21:29.231299+0000 mgr.y (mgr.14152) 96 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-09T18:21:30.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:29.233973+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:30.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:30.207467+0000 mon.a (mon.0) 421 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]: dispatch 2026-03-09T18:21:30.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:30.208636+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.108:0/2742613143' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]: dispatch 2026-03-09T18:21:30.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:30.215412+0000 mon.a (mon.0) 422 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c8fd35d5-49cd-4d8e-981a-afb708e47c9d"}]': finished 2026-03-09T18:21:30.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: cluster 2026-03-09T18:21:30.215509+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T18:21:30.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:30 vm00 bash[17468]: audit 2026-03-09T18:21:30.215699+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:31.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:31 vm00 bash[22468]: cluster 2026-03-09T18:21:30.388103+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v78: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:31.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:31 vm00 bash[22468]: audit 2026-03-09T18:21:30.859541+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.108:0/2759605766' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:31.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:31 vm00 bash[17468]: cluster 2026-03-09T18:21:30.388103+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v78: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:31.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:31 vm00 bash[17468]: audit 2026-03-09T18:21:30.859541+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 
192.168.123.108:0/2759605766' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:31.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:31 vm08 bash[17774]: cluster 2026-03-09T18:21:30.388103+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v78: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:31.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:31 vm08 bash[17774]: audit 2026-03-09T18:21:30.859541+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.108:0/2759605766' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:33 vm08 bash[17774]: cluster 2026-03-09T18:21:32.388328+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-09T18:21:33.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:33 vm00 bash[17468]: cluster 2026-03-09T18:21:32.388328+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-09T18:21:33.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:33 vm00 bash[22468]: cluster 2026-03-09T18:21:32.388328+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-09T18:21:35.866 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:35 vm08 bash[17774]: cluster 2026-03-09T18:21:34.388593+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 53 KiB/s, 0 objects/s recovering 2026-03-09T18:21:35.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:35 vm00 bash[22468]: cluster 2026-03-09T18:21:34.388593+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap 
v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 53 KiB/s, 0 objects/s recovering 2026-03-09T18:21:35.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:35 vm00 bash[17468]: cluster 2026-03-09T18:21:34.388593+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 53 KiB/s, 0 objects/s recovering 2026-03-09T18:21:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:37 vm08 bash[17774]: cluster 2026-03-09T18:21:36.388941+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T18:21:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:37 vm08 bash[17774]: audit 2026-03-09T18:21:36.398423+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:21:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:37 vm08 bash[17774]: audit 2026-03-09T18:21:36.398946+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:37 vm08 bash[17774]: cephadm 2026-03-09T18:21:36.399395+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm08 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:37 vm00 bash[22468]: cluster 2026-03-09T18:21:36.388941+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:37 vm00 bash[22468]: audit 2026-03-09T18:21:36.398423+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:37 vm00 bash[22468]: audit 2026-03-09T18:21:36.398946+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:37 vm00 bash[22468]: cephadm 2026-03-09T18:21:36.399395+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm08 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:37 vm00 bash[17468]: cluster 2026-03-09T18:21:36.388941+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:37 vm00 bash[17468]: audit 2026-03-09T18:21:36.398423+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:37 vm00 bash[17468]: audit 2026-03-09T18:21:36.398946+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:37.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:37 vm00 bash[17468]: cephadm 2026-03-09T18:21:36.399395+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm08 2026-03-09T18:21:37.632 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:37.632 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:21:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:37.632 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:21:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:37.904 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:37.904 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:21:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:37.904 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:21:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:38 vm08 bash[17774]: audit 2026-03-09T18:21:37.763382+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:38 vm08 bash[17774]: audit 2026-03-09T18:21:37.764352+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:38 vm08 bash[17774]: audit 2026-03-09T18:21:37.767135+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:38 vm08 bash[17774]: audit 2026-03-09T18:21:37.767255+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:38 vm00 bash[17468]: audit 2026-03-09T18:21:37.763382+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:38 vm00 bash[17468]: audit 2026-03-09T18:21:37.764352+0000 mon.a (mon.0) 428 : 
audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:38 vm00 bash[17468]: audit 2026-03-09T18:21:37.767135+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:38 vm00 bash[17468]: audit 2026-03-09T18:21:37.767255+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:38 vm00 bash[22468]: audit 2026-03-09T18:21:37.763382+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:38 vm00 bash[22468]: audit 2026-03-09T18:21:37.764352+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:38 vm00 bash[22468]: audit 2026-03-09T18:21:37.767135+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:38.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:38 vm00 bash[22468]: audit 2026-03-09T18:21:37.767255+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:39 vm00 bash[17468]: cluster 2026-03-09T18:21:38.389330+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 
GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T18:21:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:39 vm00 bash[22468]: cluster 2026-03-09T18:21:38.389330+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T18:21:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:39 vm08 bash[17774]: cluster 2026-03-09T18:21:38.389330+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T18:21:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:40 vm08 bash[17774]: audit 2026-03-09T18:21:40.439452+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:21:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:40 vm08 bash[17774]: audit 2026-03-09T18:21:40.447286+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:21:40.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:40 vm00 bash[17468]: audit 2026-03-09T18:21:40.439452+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:21:40.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:40 vm00 bash[17468]: audit 2026-03-09T18:21:40.447286+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:21:40.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:40 vm00 bash[22468]: audit 2026-03-09T18:21:40.439452+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:21:40.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:40 vm00 bash[22468]: audit 2026-03-09T18:21:40.447286+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:21:41.095 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 5 on host 'vm08' 2026-03-09T18:21:41.177 DEBUG:teuthology.orchestra.run.vm08:osd.5> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.5.service 2026-03-09T18:21:41.177 INFO:tasks.cephadm:Deploying osd.6 on vm08 with /dev/vdc... 2026-03-09T18:21:41.177 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vdc 2026-03-09T18:21:41.835 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:21:41.846 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm08:/dev/vdc 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: cluster 2026-03-09T18:21:40.389563+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 
2026-03-09T18:21:40.668611+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:40.673999+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:41.024514+0000 mon.a (mon.0) 435 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:41.025925+0000 mon.b (mon.2) 17 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:41.089577+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:41.089761+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:41.090696+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:41 vm08 bash[17774]: audit 2026-03-09T18:21:41.091324+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: cluster 2026-03-09T18:21:40.389563+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:40.668611+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:40.673999+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:41.024514+0000 mon.a (mon.0) 435 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:41.025925+0000 mon.b (mon.2) 17 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:41.089577+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:41.089761+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:41.090696+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:41 vm00 bash[22468]: audit 2026-03-09T18:21:41.091324+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: cluster 2026-03-09T18:21:40.389563+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:40.668611+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:40.673999+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:41.024514+0000 mon.a (mon.0) 435 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:41.025925+0000 mon.b (mon.2) 17 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 
2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:41.089577+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:41.089761+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:42.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:41.090696+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:42.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:41 vm00 bash[17468]: audit 2026-03-09T18:21:41.091324+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:41.686633+0000 mon.a (mon.0) 440 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: cluster 2026-03-09T18:21:41.686728+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:41.686882+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:41.688974+0000 
mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:41.690472+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:42.267425+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:42.268871+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:42 vm08 bash[17774]: audit 2026-03-09T18:21:42.269410+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:42.975 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:21:42 vm08 bash[23954]: debug 2026-03-09T18:21:42.689+0000 7f0ad61d4700 -1 osd.5 0 waiting for initial osdmap 2026-03-09T18:21:42.975 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:21:42 vm08 bash[23954]: debug 2026-03-09T18:21:42.697+0000 7f0acfb69700 -1 osd.5 35 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: 
audit 2026-03-09T18:21:41.686633+0000 mon.a (mon.0) 440 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: cluster 2026-03-09T18:21:41.686728+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: audit 2026-03-09T18:21:41.686882+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: audit 2026-03-09T18:21:41.688974+0000 mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: audit 2026-03-09T18:21:41.690472+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: audit 2026-03-09T18:21:42.267425+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: audit 2026-03-09T18:21:42.268871+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 
2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:42 vm00 bash[22468]: audit 2026-03-09T18:21:42.269410+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:41.686633+0000 mon.a (mon.0) 440 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: cluster 2026-03-09T18:21:41.686728+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:41.686882+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:41.688974+0000 mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:41.690472+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:42.267425+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:42.268871+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:43.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:42 vm00 bash[17468]: audit 2026-03-09T18:21:42.269410+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:43 vm08 bash[17774]: audit 2026-03-09T18:21:42.265740+0000 mgr.y (mgr.14152) 104 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:43 vm08 bash[17774]: cluster 2026-03-09T18:21:42.389978+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:43 vm08 bash[17774]: audit 2026-03-09T18:21:42.690076+0000 mon.a (mon.0) 447 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:43 vm08 bash[17774]: cluster 2026-03-09T18:21:42.690207+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T18:21:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:43 vm08 bash[17774]: audit 2026-03-09T18:21:42.692712+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:43 vm08 bash[17774]: audit 2026-03-09T18:21:42.693571+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:43 vm00 bash[17468]: audit 2026-03-09T18:21:42.265740+0000 mgr.y (mgr.14152) 104 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:43 vm00 bash[17468]: cluster 2026-03-09T18:21:42.389978+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:43 vm00 bash[17468]: audit 2026-03-09T18:21:42.690076+0000 mon.a (mon.0) 447 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:43 vm00 bash[17468]: cluster 2026-03-09T18:21:42.690207+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:43 vm00 bash[17468]: audit 2026-03-09T18:21:42.692712+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:43 vm00 bash[17468]: audit 2026-03-09T18:21:42.693571+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:43 vm00 bash[22468]: audit 2026-03-09T18:21:42.265740+0000 mgr.y (mgr.14152) 104 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:43 vm00 bash[22468]: cluster 2026-03-09T18:21:42.389978+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:43 vm00 bash[22468]: audit 2026-03-09T18:21:42.690076+0000 mon.a (mon.0) 447 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:43 vm00 bash[22468]: cluster 2026-03-09T18:21:42.690207+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:43 vm00 bash[22468]: audit 2026-03-09T18:21:42.692712+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:44.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:43 vm00 bash[22468]: audit 2026-03-09T18:21:42.693571+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:44.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:44 vm08 bash[17774]: cluster 2026-03-09T18:21:42.004631+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:44.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:44 vm08 bash[17774]: cluster 2026-03-09T18:21:42.004723+0000 osd.5 
(osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:44.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:44 vm08 bash[17774]: cluster 2026-03-09T18:21:43.692830+0000 mon.a (mon.0) 451 : cluster [INF] osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875] boot 2026-03-09T18:21:44.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:44 vm08 bash[17774]: cluster 2026-03-09T18:21:43.692927+0000 mon.a (mon.0) 452 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-09T18:21:44.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:44 vm08 bash[17774]: audit 2026-03-09T18:21:43.694268+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:44 vm00 bash[22468]: cluster 2026-03-09T18:21:42.004631+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:44 vm00 bash[22468]: cluster 2026-03-09T18:21:42.004723+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:44 vm00 bash[22468]: cluster 2026-03-09T18:21:43.692830+0000 mon.a (mon.0) 451 : cluster [INF] osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875] boot 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:44 vm00 bash[22468]: cluster 2026-03-09T18:21:43.692927+0000 mon.a (mon.0) 452 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:44 vm00 bash[22468]: audit 2026-03-09T18:21:43.694268+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:44 vm00 
bash[17468]: cluster 2026-03-09T18:21:42.004631+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:21:45.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:44 vm00 bash[17468]: cluster 2026-03-09T18:21:42.004723+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:21:45.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:44 vm00 bash[17468]: cluster 2026-03-09T18:21:43.692830+0000 mon.a (mon.0) 451 : cluster [INF] osd.5 [v2:192.168.123.108:6808/3115835875,v1:192.168.123.108:6809/3115835875] boot 2026-03-09T18:21:45.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:44 vm00 bash[17468]: cluster 2026-03-09T18:21:43.692927+0000 mon.a (mon.0) 452 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-09T18:21:45.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:44 vm00 bash[17468]: audit 2026-03-09T18:21:43.694268+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: cluster 2026-03-09T18:21:44.390274+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: cluster 2026-03-09T18:21:44.706078+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: audit 2026-03-09T18:21:45.436690+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: audit 2026-03-09T18:21:45.437569+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: audit 2026-03-09T18:21:45.438249+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: audit 2026-03-09T18:21:45.443499+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:45 vm08 bash[17774]: cluster 2026-03-09T18:21:45.706273+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: cluster 2026-03-09T18:21:44.390274+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: cluster 2026-03-09T18:21:44.706078+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: audit 2026-03-09T18:21:45.436690+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: audit 2026-03-09T18:21:45.437569+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: audit 2026-03-09T18:21:45.438249+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: audit 2026-03-09T18:21:45.443499+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:45 vm00 bash[22468]: cluster 2026-03-09T18:21:45.706273+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: cluster 2026-03-09T18:21:44.390274+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: cluster 2026-03-09T18:21:44.706078+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: audit 2026-03-09T18:21:45.436690+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: audit 2026-03-09T18:21:45.437569+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: audit 2026-03-09T18:21:45.438249+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: audit 
2026-03-09T18:21:45.443499+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:46.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:45 vm00 bash[17468]: cluster 2026-03-09T18:21:45.706273+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: cephadm 2026-03-09T18:21:45.428497+0000 mgr.y (mgr.14152) 107 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: cephadm 2026-03-09T18:21:45.438696+0000 mgr.y (mgr.14152) 108 : cephadm [INF] Adjusting osd_memory_target on vm08 to 227.8M 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: cephadm 2026-03-09T18:21:45.439204+0000 mgr.y (mgr.14152) 109 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 238957977: error parsing value: Value '238957977' is below minimum 939524096 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: audit 2026-03-09T18:21:46.324347+0000 mon.a (mon.0) 460 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]: dispatch 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: audit 2026-03-09T18:21:46.325662+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.108:0/1954827176' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]: dispatch 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: audit 2026-03-09T18:21:46.330034+0000 mon.a (mon.0) 461 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]': finished 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: cluster 2026-03-09T18:21:46.330081+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T18:21:46.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:46 vm08 bash[17774]: audit 2026-03-09T18:21:46.330187+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: cephadm 2026-03-09T18:21:45.428497+0000 mgr.y (mgr.14152) 107 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: cephadm 2026-03-09T18:21:45.438696+0000 mgr.y (mgr.14152) 108 : cephadm [INF] Adjusting osd_memory_target on vm08 to 227.8M 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: cephadm 2026-03-09T18:21:45.439204+0000 mgr.y (mgr.14152) 109 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 238957977: error parsing value: Value '238957977' is below minimum 939524096 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: audit 2026-03-09T18:21:46.324347+0000 mon.a (mon.0) 460 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]: dispatch 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: audit 2026-03-09T18:21:46.325662+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 
192.168.123.108:0/1954827176' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]: dispatch 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: audit 2026-03-09T18:21:46.330034+0000 mon.a (mon.0) 461 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]': finished 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: cluster 2026-03-09T18:21:46.330081+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:46 vm00 bash[22468]: audit 2026-03-09T18:21:46.330187+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: cephadm 2026-03-09T18:21:45.428497+0000 mgr.y (mgr.14152) 107 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: cephadm 2026-03-09T18:21:45.438696+0000 mgr.y (mgr.14152) 108 : cephadm [INF] Adjusting osd_memory_target on vm08 to 227.8M 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: cephadm 2026-03-09T18:21:45.439204+0000 mgr.y (mgr.14152) 109 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 238957977: error parsing value: Value '238957977' is below minimum 939524096 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: audit 2026-03-09T18:21:46.324347+0000 mon.a (mon.0) 460 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]: dispatch 2026-03-09T18:21:47.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: audit 2026-03-09T18:21:46.325662+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.108:0/1954827176' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]: dispatch 2026-03-09T18:21:47.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: audit 2026-03-09T18:21:46.330034+0000 mon.a (mon.0) 461 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fdedf8fe-f1d9-48e7-9db9-df7cf33b1093"}]': finished 2026-03-09T18:21:47.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: cluster 2026-03-09T18:21:46.330081+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T18:21:47.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:46 vm00 bash[17468]: audit 2026-03-09T18:21:46.330187+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:47.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:47 vm08 bash[17774]: cluster 2026-03-09T18:21:46.390498+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:47.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:47 vm08 bash[17774]: audit 2026-03-09T18:21:46.943998+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.108:0/979708518' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:47 vm00 bash[17468]: cluster 2026-03-09T18:21:46.390498+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:48.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:47 vm00 bash[17468]: audit 2026-03-09T18:21:46.943998+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.108:0/979708518' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:47 vm00 bash[22468]: cluster 2026-03-09T18:21:46.390498+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:48.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:47 vm00 bash[22468]: audit 2026-03-09T18:21:46.943998+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.108:0/979708518' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:21:50.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:49 vm00 bash[17468]: cluster 2026-03-09T18:21:48.390732+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:50.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:49 vm00 bash[22468]: cluster 2026-03-09T18:21:48.390732+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:50.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:49 vm08 bash[17774]: cluster 2026-03-09T18:21:48.390732+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:51.999 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:51 vm08 bash[17774]: cluster 2026-03-09T18:21:50.390964+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:52.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:51 vm00 bash[17468]: cluster 2026-03-09T18:21:50.390964+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:52.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:51 vm00 bash[22468]: cluster 2026-03-09T18:21:50.390964+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:53.673 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:53.673 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:53.673 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:53.673 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:53.674 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:53.674 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:53.674 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:21:53.674 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:21:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: cluster 2026-03-09T18:21:52.391194+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: audit 2026-03-09T18:21:52.781471+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: audit 2026-03-09T18:21:52.782630+0000 mon.a (mon.0) 465 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: cephadm 2026-03-09T18:21:52.783150+0000 mgr.y (mgr.14152) 114 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: audit 2026-03-09T18:21:53.697512+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: audit 2026-03-09T18:21:53.723068+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: audit 2026-03-09T18:21:53.723856+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:53 vm08 bash[17774]: audit 2026-03-09T18:21:53.724237+0000 mon.a (mon.0) 469 : 
audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: cluster 2026-03-09T18:21:52.391194+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: audit 2026-03-09T18:21:52.781471+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: audit 2026-03-09T18:21:52.782630+0000 mon.a (mon.0) 465 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: cephadm 2026-03-09T18:21:52.783150+0000 mgr.y (mgr.14152) 114 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: audit 2026-03-09T18:21:53.697512+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: audit 2026-03-09T18:21:53.723068+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: audit 2026-03-09T18:21:53.723856+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:53 vm00 bash[22468]: audit 2026-03-09T18:21:53.724237+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:54.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: cluster 2026-03-09T18:21:52.391194+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: audit 2026-03-09T18:21:52.781471+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: audit 2026-03-09T18:21:52.782630+0000 mon.a (mon.0) 465 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: cephadm 2026-03-09T18:21:52.783150+0000 mgr.y (mgr.14152) 114 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: audit 2026-03-09T18:21:53.697512+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: audit 2026-03-09T18:21:53.723068+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: audit 2026-03-09T18:21:53.723856+0000 mon.a (mon.0) 
468 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:54.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:53 vm00 bash[17468]: audit 2026-03-09T18:21:53.724237+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:56.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:55 vm00 bash[22468]: cluster 2026-03-09T18:21:54.391454+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:56.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:55 vm00 bash[17468]: cluster 2026-03-09T18:21:54.391454+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:56.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:55 vm08 bash[17774]: cluster 2026-03-09T18:21:54.391454+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:57.193 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 6 on host 'vm08' 2026-03-09T18:21:57.257 DEBUG:teuthology.orchestra.run.vm08:osd.6> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.6.service 2026-03-09T18:21:57.258 INFO:tasks.cephadm:Deploying osd.7 on vm08 with /dev/vdb... 
2026-03-09T18:21:57.258 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- lvm zap /dev/vdb 2026-03-09T18:21:57.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:57 vm00 bash[22468]: cluster 2026-03-09T18:21:56.391688+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:57.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:57 vm00 bash[22468]: audit 2026-03-09T18:21:56.468287+0000 mon.a (mon.0) 470 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:21:57.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:57 vm00 bash[22468]: audit 2026-03-09T18:21:56.469683+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:21:57.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:57 vm00 bash[22468]: audit 2026-03-09T18:21:56.702686+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:57.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:57 vm00 bash[22468]: audit 2026-03-09T18:21:56.877532+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:57.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:57 vm00 bash[17468]: cluster 2026-03-09T18:21:56.391688+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:57.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:57 vm00 bash[17468]: audit 
2026-03-09T18:21:56.468287+0000 mon.a (mon.0) 470 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:21:57.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:57 vm00 bash[17468]: audit 2026-03-09T18:21:56.469683+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:21:57.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:57 vm00 bash[17468]: audit 2026-03-09T18:21:56.702686+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:57.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:57 vm00 bash[17468]: audit 2026-03-09T18:21:56.877532+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:57.431 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:57 vm08 bash[17774]: cluster 2026-03-09T18:21:56.391688+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:57.431 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:57 vm08 bash[17774]: audit 2026-03-09T18:21:56.468287+0000 mon.a (mon.0) 470 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:21:57.431 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:57 vm08 bash[17774]: audit 2026-03-09T18:21:56.469683+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:21:57.432 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:57 vm08 bash[17774]: audit 
2026-03-09T18:21:56.702686+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:57.432 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:57 vm08 bash[17774]: audit 2026-03-09T18:21:56.877532+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:58.015 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T18:21:58.029 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch daemon add osd vm08:/dev/vdb 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.093344+0000 mon.a (mon.0) 473 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: cluster 2026-03-09T18:21:57.093443+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.094291+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.095649+0000 mon.a (mon.0) 476 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.097091+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 
[v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.187319+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.194289+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.195278+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:58.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:58 vm00 bash[22468]: audit 2026-03-09T18:21:57.195821+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.093344+0000 mon.a (mon.0) 473 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: cluster 2026-03-09T18:21:57.093443+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.094291+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.095649+0000 mon.a (mon.0) 476 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.097091+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.187319+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.194289+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.195278+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:58.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:58 vm00 bash[17468]: audit 2026-03-09T18:21:57.195821+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 
2026-03-09T18:21:57.093344+0000 mon.a (mon.0) 473 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: cluster 2026-03-09T18:21:57.093443+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 2026-03-09T18:21:57.094291+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 2026-03-09T18:21:57.095649+0000 mon.a (mon.0) 476 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 2026-03-09T18:21:57.097091+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 2026-03-09T18:21:57.187319+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 2026-03-09T18:21:57.194289+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 
2026-03-09T18:21:57.195278+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:58.433 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:58 vm08 bash[17774]: audit 2026-03-09T18:21:57.195821+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:21:58.434 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:21:58 vm08 bash[27102]: debug 2026-03-09T18:21:58.101+0000 7f53bfc4b700 -1 osd.6 0 waiting for initial osdmap 2026-03-09T18:21:58.434 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:21:58 vm08 bash[27102]: debug 2026-03-09T18:21:58.121+0000 7f53b9de1700 -1 osd.6 41 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.101222+0000 mon.a (mon.0) 481 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: cluster 2026-03-09T18:21:58.101393+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.102328+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.119285+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 
2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: cluster 2026-03-09T18:21:58.392022+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.628789+0000 mgr.y (mgr.14152) 118 : audit [DBG] from='client.24281 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.630441+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.631975+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:21:59 vm00 bash[22468]: audit 2026-03-09T18:21:58.632336+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.101222+0000 mon.a (mon.0) 481 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: cluster 2026-03-09T18:21:58.101393+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 
2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.102328+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.119285+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: cluster 2026-03-09T18:21:58.392022+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.628789+0000 mgr.y (mgr.14152) 118 : audit [DBG] from='client.24281 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.630441+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.631975+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:59.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:21:59 vm00 bash[17468]: audit 2026-03-09T18:21:58.632336+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.101222+0000 mon.a (mon.0) 481 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: cluster 2026-03-09T18:21:58.101393+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.102328+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.119285+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: cluster 2026-03-09T18:21:58.392022+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.628789+0000 mgr.y (mgr.14152) 118 : audit [DBG] from='client.24281 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.630441+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": 
"json"}]: dispatch 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.631975+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:21:59.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:21:59 vm08 bash[17774]: audit 2026-03-09T18:21:58.632336+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:00 vm00 bash[22468]: cluster 2026-03-09T18:21:57.459259+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:00 vm00 bash[22468]: cluster 2026-03-09T18:21:57.459453+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:00 vm00 bash[22468]: cluster 2026-03-09T18:21:59.103835+0000 mon.a (mon.0) 488 : cluster [INF] osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675] boot 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:00 vm00 bash[22468]: cluster 2026-03-09T18:21:59.104002+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:00 vm00 bash[22468]: audit 2026-03-09T18:21:59.105400+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:00 vm00 bash[22468]: cluster 2026-03-09T18:22:00.106459+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:22:00 vm00 bash[17468]: cluster 2026-03-09T18:21:57.459259+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:00 vm00 bash[17468]: cluster 2026-03-09T18:21:57.459453+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:00 vm00 bash[17468]: cluster 2026-03-09T18:21:59.103835+0000 mon.a (mon.0) 488 : cluster [INF] osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675] boot 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:00 vm00 bash[17468]: cluster 2026-03-09T18:21:59.104002+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:00 vm00 bash[17468]: audit 2026-03-09T18:21:59.105400+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:22:00.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:00 vm00 bash[17468]: cluster 2026-03-09T18:22:00.106459+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T18:22:00.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:00 vm08 bash[17774]: cluster 2026-03-09T18:21:57.459259+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:22:00.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:00 vm08 bash[17774]: cluster 2026-03-09T18:21:57.459453+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:22:00.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:00 vm08 bash[17774]: cluster 2026-03-09T18:21:59.103835+0000 mon.a (mon.0) 488 : cluster [INF] osd.6 [v2:192.168.123.108:6816/3870182675,v1:192.168.123.108:6817/3870182675] boot 2026-03-09T18:22:00.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:00 vm08 bash[17774]: 
cluster 2026-03-09T18:21:59.104002+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in 2026-03-09T18:22:00.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:00 vm08 bash[17774]: audit 2026-03-09T18:21:59.105400+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:22:00.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:00 vm08 bash[17774]: cluster 2026-03-09T18:22:00.106459+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T18:22:01.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:01 vm00 bash[17468]: cluster 2026-03-09T18:22:00.392333+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:01.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:01 vm00 bash[17468]: cluster 2026-03-09T18:22:01.104823+0000 mon.a (mon.0) 492 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T18:22:01.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:01 vm00 bash[22468]: cluster 2026-03-09T18:22:00.392333+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:01.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:01 vm00 bash[22468]: cluster 2026-03-09T18:22:01.104823+0000 mon.a (mon.0) 492 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T18:22:01.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:01 vm08 bash[17774]: cluster 2026-03-09T18:22:00.392333+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:01.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:01 vm08 bash[17774]: cluster 2026-03-09T18:22:01.104823+0000 mon.a (mon.0) 492 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 
2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: cluster 2026-03-09T18:22:02.392608+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: cephadm 2026-03-09T18:22:02.652888+0000 mgr.y (mgr.14152) 121 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:02.658654+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:02.659493+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:02.659984+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:02.660394+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: cephadm 2026-03-09T18:22:02.660701+0000 mgr.y (mgr.14152) 122 : cephadm [INF] Adjusting osd_memory_target on vm08 to 151.9M 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: cephadm 
2026-03-09T18:22:02.661132+0000 mgr.y (mgr.14152) 123 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:02.692898+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:03.206317+0000 mon.a (mon.0) 498 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]: dispatch 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:03.207568+0000 mon.b (mon.2) 23 : audit [INF] from='client.? 192.168.123.108:0/1273701773' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]: dispatch 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:03.214007+0000 mon.a (mon.0) 499 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]': finished 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: cluster 2026-03-09T18:22:03.214063+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T18:22:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:03 vm08 bash[17774]: audit 2026-03-09T18:22:03.214147+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: cluster 2026-03-09T18:22:02.392608+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: cephadm 2026-03-09T18:22:02.652888+0000 mgr.y (mgr.14152) 121 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:02.658654+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:02.659493+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:02.659984+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:02.660394+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: cephadm 2026-03-09T18:22:02.660701+0000 mgr.y (mgr.14152) 122 : cephadm [INF] Adjusting osd_memory_target on vm08 to 151.9M 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: cephadm 2026-03-09T18:22:02.661132+0000 mgr.y (mgr.14152) 123 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:02.692898+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:03.206317+0000 mon.a (mon.0) 498 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]: dispatch 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:03.207568+0000 mon.b (mon.2) 23 : audit [INF] from='client.? 192.168.123.108:0/1273701773' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]: dispatch 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:03.214007+0000 mon.a (mon.0) 499 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]': finished 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: cluster 2026-03-09T18:22:03.214063+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T18:22:04.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:03 vm00 bash[22468]: audit 2026-03-09T18:22:03.214147+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: cluster 2026-03-09T18:22:02.392608+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: cephadm 2026-03-09T18:22:02.652888+0000 mgr.y (mgr.14152) 121 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:02.658654+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:02.659493+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:02.659984+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:02.660394+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: cephadm 2026-03-09T18:22:02.660701+0000 mgr.y (mgr.14152) 122 : cephadm [INF] Adjusting osd_memory_target on vm08 to 151.9M 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: cephadm 2026-03-09T18:22:02.661132+0000 mgr.y (mgr.14152) 123 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 159305318: error parsing value: Value '159305318' is below minimum 939524096 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:02.692898+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:03.206317+0000 mon.a (mon.0) 498 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]: dispatch 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:03.207568+0000 mon.b (mon.2) 23 : audit [INF] from='client.? 192.168.123.108:0/1273701773' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]: dispatch 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:03.214007+0000 mon.a (mon.0) 499 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8972f61-b0b9-45d8-8b8e-e660f598240a"}]': finished 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: cluster 2026-03-09T18:22:03.214063+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T18:22:04.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:03 vm00 bash[17468]: audit 2026-03-09T18:22:03.214147+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:04 vm08 bash[17774]: audit 2026-03-09T18:22:03.921471+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.108:0/628298863' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:22:05.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:04 vm00 bash[22468]: audit 2026-03-09T18:22:03.921471+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.108:0/628298863' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:22:05.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:04 vm00 bash[17468]: audit 2026-03-09T18:22:03.921471+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 
192.168.123.108:0/628298863' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:22:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:05 vm08 bash[17774]: cluster 2026-03-09T18:22:04.392946+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v107: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:06.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:05 vm00 bash[22468]: cluster 2026-03-09T18:22:04.392946+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v107: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:06.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:05 vm00 bash[17468]: cluster 2026-03-09T18:22:04.392946+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v107: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:07.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:07 vm00 bash[22468]: cluster 2026-03-09T18:22:06.393229+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v108: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:07.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:07 vm00 bash[17468]: cluster 2026-03-09T18:22:06.393229+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v108: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:07 vm08 bash[17774]: cluster 2026-03-09T18:22:06.393229+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v108: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:09.754 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:09 vm08 bash[17774]: cluster 2026-03-09T18:22:08.393512+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v109: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:09.884 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:09 vm00 bash[22468]: cluster 2026-03-09T18:22:08.393512+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v109: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:09.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:09 vm00 bash[17468]: cluster 2026-03-09T18:22:08.393512+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v109: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:10.441 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.441 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.441 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:10.441 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.441 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.691 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:10 vm08 bash[17774]: audit 2026-03-09T18:22:09.651671+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T18:22:10.691 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:10 vm08 bash[17774]: audit 2026-03-09T18:22:09.652234+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:10.691 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:10 vm08 bash[17774]: cephadm 2026-03-09T18:22:09.652626+0000 mgr.y (mgr.14152) 127 : cephadm [INF] Deploying daemon osd.7 on vm08 2026-03-09T18:22:10.691 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.691 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.692 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.692 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:10.692 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:10 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:10 vm00 bash[22468]: audit 2026-03-09T18:22:09.651671+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T18:22:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:10 vm00 bash[22468]: audit 2026-03-09T18:22:09.652234+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:10.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:10 vm00 bash[22468]: cephadm 2026-03-09T18:22:09.652626+0000 mgr.y (mgr.14152) 127 : cephadm [INF] Deploying daemon osd.7 on vm08 2026-03-09T18:22:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:10 vm00 bash[17468]: audit 2026-03-09T18:22:09.651671+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T18:22:10.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:10 vm00 bash[17468]: audit 2026-03-09T18:22:09.652234+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:10.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:10 vm00 bash[17468]: cephadm 2026-03-09T18:22:09.652626+0000 mgr.y (mgr.14152) 127 : cephadm [INF] Deploying daemon osd.7 on vm08 2026-03-09T18:22:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:11 vm08 bash[17774]: cluster 2026-03-09T18:22:10.393750+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:11 vm08 bash[17774]: audit 2026-03-09T18:22:10.668487+0000 mon.a (mon.0) 504 : audit [INF] 
from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:11 vm08 bash[17774]: audit 2026-03-09T18:22:10.671474+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:11 vm08 bash[17774]: audit 2026-03-09T18:22:10.673479+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:11 vm08 bash[17774]: audit 2026-03-09T18:22:10.674478+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:11 vm00 bash[22468]: cluster 2026-03-09T18:22:10.393750+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:11 vm00 bash[22468]: audit 2026-03-09T18:22:10.668487+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:11 vm00 bash[22468]: audit 2026-03-09T18:22:10.671474+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:11 vm00 bash[22468]: audit 2026-03-09T18:22:10.673479+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:11 vm00 bash[22468]: audit 2026-03-09T18:22:10.674478+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:11 vm00 bash[17468]: cluster 2026-03-09T18:22:10.393750+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:11 vm00 bash[17468]: audit 2026-03-09T18:22:10.668487+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:11 vm00 bash[17468]: audit 2026-03-09T18:22:10.671474+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:11 vm00 bash[17468]: audit 2026-03-09T18:22:10.673479+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:12.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:11 vm00 bash[17468]: audit 2026-03-09T18:22:10.674478+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:13.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:13 vm08 bash[17774]: cluster 2026-03-09T18:22:12.394005+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:14.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:13 vm00 
bash[17468]: cluster 2026-03-09T18:22:12.394005+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:13 vm00 bash[22468]: cluster 2026-03-09T18:22:12.394005+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:14.192 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 7 on host 'vm08' 2026-03-09T18:22:14.255 DEBUG:teuthology.orchestra.run.vm08:osd.7> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.7.service 2026-03-09T18:22:14.256 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-09T18:22:14.256 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd stat -f json 2026-03-09T18:22:14.781 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:14.848 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":46,"num_osds":8,"num_up_osds":7,"osd_up_since":1773080519,"num_in_osds":8,"osd_in_since":1773080523,"num_remapped_pgs":0} 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:13.692229+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:13.697593+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:14.002084+0000 mon.c (mon.1) 11 : audit [INF] from='osd.7 
[v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:14.002500+0000 mon.a (mon.0) 510 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:14.189223+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:14.214789+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:14.216055+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:14 vm08 bash[17774]: audit 2026-03-09T18:22:14.216742+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:15.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:13.692229+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:15.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:13.697593+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' 2026-03-09T18:22:15.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:14.002084+0000 mon.c (mon.1) 11 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:22:15.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:14.002500+0000 mon.a (mon.0) 510 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:14.189223+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:14.214789+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:14.216055+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:14 vm00 bash[22468]: audit 2026-03-09T18:22:14.216742+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:13.692229+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:15.135 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:13.697593+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:14.002084+0000 mon.c (mon.1) 11 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:14.002500+0000 mon.a (mon.0) 510 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:14.189223+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:14.214789+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:14.216055+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:15.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:14 vm00 bash[17468]: audit 2026-03-09T18:22:14.216742+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:15.849 DEBUG:teuthology.orchestra.run.vm00:> 
sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd stat -f json 2026-03-09T18:22:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: cluster 2026-03-09T18:22:14.394345+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: audit 2026-03-09T18:22:14.705393+0000 mon.a (mon.0) 515 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:22:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: cluster 2026-03-09T18:22:14.705418+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T18:22:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: audit 2026-03-09T18:22:14.706018+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: audit 2026-03-09T18:22:14.709741+0000 mon.c (mon.1) 12 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:22:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: audit 2026-03-09T18:22:14.710080+0000 mon.a (mon.0) 518 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:22:15.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:15 vm08 bash[17774]: audit 2026-03-09T18:22:14.779162+0000 mon.a (mon.0) 519 : audit [DBG] from='client.? 192.168.123.100:0/1557372604' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:22:15.975 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:15 vm08 bash[30271]: debug 2026-03-09T18:22:15.717+0000 7fe13af72700 -1 osd.7 0 waiting for initial osdmap 2026-03-09T18:22:15.975 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:15 vm08 bash[30271]: debug 2026-03-09T18:22:15.733+0000 7fe13610a700 -1 osd.7 47 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: cluster 2026-03-09T18:22:14.394345+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: audit 2026-03-09T18:22:14.705393+0000 mon.a (mon.0) 515 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: cluster 2026-03-09T18:22:14.705418+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: audit 2026-03-09T18:22:14.706018+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: audit 2026-03-09T18:22:14.709741+0000 mon.c (mon.1) 12 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923]' entity='osd.7' 
cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: audit 2026-03-09T18:22:14.710080+0000 mon.a (mon.0) 518 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:15 vm00 bash[22468]: audit 2026-03-09T18:22:14.779162+0000 mon.a (mon.0) 519 : audit [DBG] from='client.? 192.168.123.100:0/1557372604' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: cluster 2026-03-09T18:22:14.394345+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: audit 2026-03-09T18:22:14.705393+0000 mon.a (mon.0) 515 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: cluster 2026-03-09T18:22:14.705418+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: audit 2026-03-09T18:22:14.706018+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: audit 2026-03-09T18:22:14.709741+0000 mon.c (mon.1) 12 : audit [INF] from='osd.7 
[v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: audit 2026-03-09T18:22:14.710080+0000 mon.a (mon.0) 518 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:22:15.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:15 vm00 bash[17468]: audit 2026-03-09T18:22:14.779162+0000 mon.a (mon.0) 519 : audit [DBG] from='client.? 192.168.123.100:0/1557372604' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:22:16.321 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:16.377 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":48,"num_osds":8,"num_up_osds":8,"osd_up_since":1773080536,"num_in_osds":8,"osd_in_since":1773080523,"num_remapped_pgs":1} 2026-03-09T18:22:16.377 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd dump --format=json 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: audit 2026-03-09T18:22:15.713455+0000 mon.a (mon.0) 520 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: cluster 2026-03-09T18:22:15.713661+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: audit 2026-03-09T18:22:15.714668+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14152 
192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: audit 2026-03-09T18:22:15.721767+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: cluster 2026-03-09T18:22:16.094808+0000 mon.a (mon.0) 524 : cluster [INF] osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923] boot 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: cluster 2026-03-09T18:22:16.095108+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: audit 2026-03-09T18:22:16.096310+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:16.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:16 vm08 bash[17774]: audit 2026-03-09T18:22:16.319135+0000 mon.a (mon.0) 527 : audit [DBG] from='client.? 
192.168.123.100:0/1038311312' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: audit 2026-03-09T18:22:15.713455+0000 mon.a (mon.0) 520 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: cluster 2026-03-09T18:22:15.713661+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: audit 2026-03-09T18:22:15.714668+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: audit 2026-03-09T18:22:15.721767+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: cluster 2026-03-09T18:22:16.094808+0000 mon.a (mon.0) 524 : cluster [INF] osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923] boot 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: cluster 2026-03-09T18:22:16.095108+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-09T18:22:17.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:16 vm00 bash[22468]: audit 2026-03-09T18:22:16.096310+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:22:16 vm00 bash[22468]: audit 2026-03-09T18:22:16.319135+0000 mon.a (mon.0) 527 : audit [DBG] from='client.? 192.168.123.100:0/1038311312' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: audit 2026-03-09T18:22:15.713455+0000 mon.a (mon.0) 520 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: cluster 2026-03-09T18:22:15.713661+0000 mon.a (mon.0) 521 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: audit 2026-03-09T18:22:15.714668+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: audit 2026-03-09T18:22:15.721767+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: cluster 2026-03-09T18:22:16.094808+0000 mon.a (mon.0) 524 : cluster [INF] osd.7 [v2:192.168.123.108:6824/1101522923,v1:192.168.123.108:6825/1101522923] boot 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: cluster 2026-03-09T18:22:16.095108+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: audit 2026-03-09T18:22:16.096310+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:17.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:16 vm00 bash[17468]: audit 2026-03-09T18:22:16.319135+0000 mon.a (mon.0) 527 : audit [DBG] from='client.? 192.168.123.100:0/1038311312' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:22:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:17 vm00 bash[22468]: cluster 2026-03-09T18:22:15.038588+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:22:18.138 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:17 vm00 bash[22468]: cluster 2026-03-09T18:22:15.038729+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:22:18.138 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:17 vm00 bash[22468]: cluster 2026-03-09T18:22:16.394666+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:18.138 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:17 vm00 bash[22468]: cluster 2026-03-09T18:22:17.094902+0000 mon.a (mon.0) 528 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-09T18:22:18.138 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:17 vm00 bash[17468]: cluster 2026-03-09T18:22:15.038588+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:22:18.139 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:17 vm00 bash[17468]: cluster 2026-03-09T18:22:15.038729+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:22:18.139 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:17 vm00 bash[17468]: cluster 2026-03-09T18:22:16.394666+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:18.139 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:17 vm00 bash[17468]: cluster 2026-03-09T18:22:17.094902+0000 mon.a (mon.0) 528 : 
cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-09T18:22:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:17 vm08 bash[17774]: cluster 2026-03-09T18:22:15.038588+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:22:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:17 vm08 bash[17774]: cluster 2026-03-09T18:22:15.038729+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:22:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:17 vm08 bash[17774]: cluster 2026-03-09T18:22:16.394666+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T18:22:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:17 vm08 bash[17774]: cluster 2026-03-09T18:22:17.094902+0000 mon.a (mon.0) 528 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-09T18:22:18.991 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:19.343 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:19.343 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":50,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","created":"2026-03-09T18:19:14.917724+0000","modified":"2026-03-09T18:22:18.088172+0000","last_up_change":"2026-03-09T18:22:16.082402+0000","last_in_change":"2026-03-09T18:22:03.206760+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T18:20:56.568822+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"us
e_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"b0cac7d6-07bf-4b00-9243-24f6ec5bc470","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6803","nonce":4034633438}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6805","nonce":4034633438}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6809","nonce":4034633438}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6807","nonce":4034633438}]},"public_addr":"192.168.123.100:6803/4034633438","cluster_addr":"192.168.123.100:6805/4034633438","heartbeat_back_addr":"192.168.123.100:6809/4034633438","heartbeat_front_addr":"192.168.123.100:6807/4034633438","state":["exists","up"]},{"osd":1,"uuid":"9fc1e6b3-451c-497e-a994-131046179fb9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6811","nonce":3881919578}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6813","nonce":3881919578}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":3881919578},{"type":"v1","addr":"192.168.123
.100:6817","nonce":3881919578}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6815","nonce":3881919578}]},"public_addr":"192.168.123.100:6811/3881919578","cluster_addr":"192.168.123.100:6813/3881919578","heartbeat_back_addr":"192.168.123.100:6817/3881919578","heartbeat_front_addr":"192.168.123.100:6815/3881919578","state":["exists","up"]},{"osd":2,"uuid":"b6754d4f-0b5b-4d48-8415-b590ff7d2cdb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6819","nonce":1380134913}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6821","nonce":1380134913}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6825","nonce":1380134913}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6823","nonce":1380134913}]},"public_addr":"192.168.123.100:6819/1380134913","cluster_addr":"192.168.123.100:6821/1380134913","heartbeat_back_addr":"192.168.123.100:6825/1380134913","heartbeat_front_addr":"192.168.123.100:6823/1380134913","state":["exists","up"]},{"osd":3,"uuid":"04bdb6c0-c351-4b7e-b364-865748cfae11","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6827","nonce":51325005}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6829","nonce":51325005}]},"heartbeat_b
ack_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6833","nonce":51325005}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6831","nonce":51325005}]},"public_addr":"192.168.123.100:6827/51325005","cluster_addr":"192.168.123.100:6829/51325005","heartbeat_back_addr":"192.168.123.100:6833/51325005","heartbeat_front_addr":"192.168.123.100:6831/51325005","state":["exists","up"]},{"osd":4,"uuid":"28dbafde-327a-4cb7-aaf4-8f0bed8a7a21","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6801","nonce":3738925586}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6803","nonce":3738925586}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6807","nonce":3738925586}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6805","nonce":3738925586}]},"public_addr":"192.168.123.108:6801/3738925586","cluster_addr":"192.168.123.108:6803/3738925586","heartbeat_back_addr":"192.168.123.108:6807/3738925586","heartbeat_front_addr":"192.168.123.108:6805/3738925586","state":["exists","up"]},{"osd":5,"uuid":"c8fd35d5-49cd-4d8e-981a-afb708e47c9d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6808","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6809","nonce":3115835875}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.
123.108:6810","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6811","nonce":3115835875}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6814","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6815","nonce":3115835875}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6812","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6813","nonce":3115835875}]},"public_addr":"192.168.123.108:6809/3115835875","cluster_addr":"192.168.123.108:6811/3115835875","heartbeat_back_addr":"192.168.123.108:6815/3115835875","heartbeat_front_addr":"192.168.123.108:6813/3115835875","state":["exists","up"]},{"osd":6,"uuid":"fdedf8fe-f1d9-48e7-9db9-df7cf33b1093","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":42,"up_thru":43,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6816","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6817","nonce":3870182675}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6818","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6819","nonce":3870182675}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6822","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6823","nonce":3870182675}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6820","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6821","nonce":3870182675}]},"public_addr":"192.168.123.108:6817/3870182675","cluster_addr":"192.168.123.108:6819/3870182675","heartbeat_back_addr":"192.168.123.108:6823/3870182675","heartbeat_front_addr":"192.168.123.108:6821/3870182675","state":["exists","up"]},{"osd":7,"uuid":"e8972f61-b0b9-45d8-8b8e-e660f598240a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":48,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6824","nonce":110152
2923},{"type":"v1","addr":"192.168.123.108:6825","nonce":1101522923}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6826","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6827","nonce":1101522923}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6830","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6831","nonce":1101522923}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6828","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6829","nonce":1101522923}]},"public_addr":"192.168.123.108:6825/1101522923","cluster_addr":"192.168.123.108:6827/1101522923","heartbeat_back_addr":"192.168.123.108:6831/1101522923","heartbeat_front_addr":"192.168.123.108:6829/1101522923","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:22.289419+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:38.302448+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:53.686141+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:09.836659+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:25.246445+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:42.004834+0000","dead_epoch":0},{"osd":6,"down_stamp"
:"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:57.459466+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:22:15.038732+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/2948627942":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/1939871250":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6800/1514471438":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6801/1514471438":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/2433128758":"2026-03-10T18:19:29.532285+0000","192.168.123.100:0/3360653556":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/4158221249":"2026-03-10T18:19:29.532285+0000","192.168.123.100:6801/4196289624":"2026-03-10T18:19:29.532285+0000","192.168.123.100:6800/4196289624":"2026-03-10T18:19:29.532285+0000","192.168.123.100:0/3505528954":"2026-03-10T18:19:29.532285+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: cluster 2026-03-09T18:22:18.240726+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: cluster 2026-03-09T18:22:18.394950+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:19.356 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: cephadm 2026-03-09T18:22:18.770952+0000 mgr.y (mgr.14152) 133 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: audit 2026-03-09T18:22:18.787382+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: audit 2026-03-09T18:22:18.788918+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: audit 2026-03-09T18:22:18.790511+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: audit 2026-03-09T18:22:18.791428+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: audit 2026-03-09T18:22:18.792290+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: cephadm 2026-03-09T18:22:18.792901+0000 mgr.y (mgr.14152) 134 : cephadm [INF] Adjusting osd_memory_target on vm08 to 113.9M 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: cephadm 
2026-03-09T18:22:18.793575+0000 mgr.y (mgr.14152) 135 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:19 vm00 bash[22468]: audit 2026-03-09T18:22:18.797822+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: cluster 2026-03-09T18:22:18.240726+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: cluster 2026-03-09T18:22:18.394950+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: cephadm 2026-03-09T18:22:18.770952+0000 mgr.y (mgr.14152) 133 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: audit 2026-03-09T18:22:18.787382+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: audit 2026-03-09T18:22:18.788918+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: audit 2026-03-09T18:22:18.790511+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.356 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: audit 2026-03-09T18:22:18.791428+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: audit 2026-03-09T18:22:18.792290+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: cephadm 2026-03-09T18:22:18.792901+0000 mgr.y (mgr.14152) 134 : cephadm [INF] Adjusting osd_memory_target on vm08 to 113.9M 2026-03-09T18:22:19.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: cephadm 2026-03-09T18:22:18.793575+0000 mgr.y (mgr.14152) 135 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-09T18:22:19.357 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:19 vm00 bash[17468]: audit 2026-03-09T18:22:18.797822+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:19.397 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T18:20:56.568822+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}}] 2026-03-09T18:22:19.397 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd pool get .mgr pg_num 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: cluster 2026-03-09T18:22:18.240726+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: cluster 2026-03-09T18:22:18.394950+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: cephadm 2026-03-09T18:22:18.770952+0000 mgr.y (mgr.14152) 133 : cephadm [INF] Detected new or changed devices on 
vm08 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: audit 2026-03-09T18:22:18.787382+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: audit 2026-03-09T18:22:18.788918+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: audit 2026-03-09T18:22:18.790511+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: audit 2026-03-09T18:22:18.791428+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: audit 2026-03-09T18:22:18.792290+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: cephadm 2026-03-09T18:22:18.792901+0000 mgr.y (mgr.14152) 134 : cephadm [INF] Adjusting osd_memory_target on vm08 to 113.9M 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: cephadm 2026-03-09T18:22:18.793575+0000 mgr.y (mgr.14152) 135 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 119478988: error parsing value: Value '119478988' is below minimum 
939524096 2026-03-09T18:22:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:19 vm08 bash[17774]: audit 2026-03-09T18:22:18.797822+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:20.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:20 vm00 bash[22468]: audit 2026-03-09T18:22:19.340833+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.100:0/1480827611' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:20.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:20 vm00 bash[17468]: audit 2026-03-09T18:22:19.340833+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.100:0/1480827611' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:20.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:20 vm08 bash[17774]: audit 2026-03-09T18:22:19.340833+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.100:0/1480827611' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:21.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:21 vm00 bash[22468]: cluster 2026-03-09T18:22:20.395240+0000 mgr.y (mgr.14152) 136 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:21.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:21 vm00 bash[17468]: cluster 2026-03-09T18:22:20.395240+0000 mgr.y (mgr.14152) 136 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:21.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:21 vm08 bash[17774]: cluster 2026-03-09T18:22:20.395240+0000 mgr.y (mgr.14152) 136 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:22.022 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:22.368 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-09T18:22:22.439 INFO:tasks.cephadm:Adding prometheus.a on vm08 2026-03-09T18:22:22.439 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch apply prometheus '1;vm08=a' 2026-03-09T18:22:22.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:22 vm00 bash[17468]: audit 2026-03-09T18:22:22.365976+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.100:0/2416208133' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T18:22:22.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:22 vm00 bash[22468]: audit 2026-03-09T18:22:22.365976+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.100:0/2416208133' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T18:22:22.716 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:22 vm08 bash[17774]: audit 2026-03-09T18:22:22.365976+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.100:0/2416208133' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T18:22:22.916 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled prometheus update... 
2026-03-09T18:22:22.971 DEBUG:teuthology.orchestra.run.vm08:prometheus.a> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service 2026-03-09T18:22:22.972 INFO:tasks.cephadm:Adding node-exporter.a on vm00 2026-03-09T18:22:22.972 INFO:tasks.cephadm:Adding node-exporter.b on vm08 2026-03-09T18:22:22.972 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch apply node-exporter '2;vm00=a;vm08=b' 2026-03-09T18:22:23.477 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled node-exporter update... 2026-03-09T18:22:23.536 DEBUG:teuthology.orchestra.run.vm00:node-exporter.a> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.a.service 2026-03-09T18:22:23.537 DEBUG:teuthology.orchestra.run.vm08:node-exporter.b> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.b.service 2026-03-09T18:22:23.539 INFO:tasks.cephadm:Adding alertmanager.a on vm00 2026-03-09T18:22:23.539 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch apply alertmanager '1;vm00=a' 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: cluster 2026-03-09T18:22:22.395565+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: audit 2026-03-09T18:22:22.914076+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: audit 
2026-03-09T18:22:22.920774+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: audit 2026-03-09T18:22:22.921689+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: audit 2026-03-09T18:22:22.922190+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: audit 2026-03-09T18:22:22.927458+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:23.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:23 vm08 bash[17774]: audit 2026-03-09T18:22:22.930669+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: cluster 2026-03-09T18:22:22.395565+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: audit 2026-03-09T18:22:22.914076+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: audit 2026-03-09T18:22:22.920774+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: audit 2026-03-09T18:22:22.921689+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: audit 2026-03-09T18:22:22.922190+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: audit 2026-03-09T18:22:22.927458+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:23 vm00 bash[22468]: audit 2026-03-09T18:22:22.930669+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: cluster 2026-03-09T18:22:22.395565+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: audit 2026-03-09T18:22:22.914076+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: audit 2026-03-09T18:22:22.920774+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:23.884 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: audit 2026-03-09T18:22:22.921689+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:23.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: audit 2026-03-09T18:22:22.922190+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: audit 2026-03-09T18:22:22.927458+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:23.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:23 vm00 bash[17468]: audit 2026-03-09T18:22:22.930669+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T18:22:24.065 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:23 vm08 bash[18535]: ignoring --setuser ceph since I am not root 2026-03-09T18:22:24.065 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:23 vm08 bash[18535]: ignoring --setgroup ceph since I am not root 2026-03-09T18:22:24.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:23 vm00 bash[17744]: ignoring --setuser ceph since I am not root 2026-03-09T18:22:24.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:23 vm00 bash[17744]: ignoring --setgroup ceph since I am not root 2026-03-09T18:22:24.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:24 vm00 bash[17744]: debug 2026-03-09T18:22:24.053+0000 7f18b024e000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:22:24.384 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:24 vm00 bash[17744]: debug 2026-03-09T18:22:24.101+0000 7f18b024e000 -1 
mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:22:24.443 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:24 vm08 bash[18535]: debug 2026-03-09T18:22:24.057+0000 7f837f724000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:22:24.443 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:24 vm08 bash[18535]: debug 2026-03-09T18:22:24.109+0000 7f837f724000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: audit 2026-03-09T18:22:22.907695+0000 mgr.y (mgr.14152) 138 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm08=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: cephadm 2026-03-09T18:22:22.908783+0000 mgr.y (mgr.14152) 139 : cephadm [INF] Saving service prometheus spec with placement vm08=a;count:1 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: audit 2026-03-09T18:22:23.468861+0000 mgr.y (mgr.14152) 140 : audit [DBG] from='client.24320 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: cephadm 2026-03-09T18:22:23.469744+0000 mgr.y (mgr.14152) 141 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm08=b;count:2 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: audit 2026-03-09T18:22:23.475416+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: audit 2026-03-09T18:22:23.940782+0000 
mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:24 vm08 bash[17774]: cluster 2026-03-09T18:22:23.940842+0000 mon.a (mon.0) 544 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T18:22:24.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:24 vm08 bash[18535]: debug 2026-03-09T18:22:24.437+0000 7f837f724000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: audit 2026-03-09T18:22:22.907695+0000 mgr.y (mgr.14152) 138 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm08=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: cephadm 2026-03-09T18:22:22.908783+0000 mgr.y (mgr.14152) 139 : cephadm [INF] Saving service prometheus spec with placement vm08=a;count:1 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: audit 2026-03-09T18:22:23.468861+0000 mgr.y (mgr.14152) 140 : audit [DBG] from='client.24320 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: cephadm 2026-03-09T18:22:23.469744+0000 mgr.y (mgr.14152) 141 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm08=b;count:2 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: audit 2026-03-09T18:22:23.475416+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 
2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: audit 2026-03-09T18:22:23.940782+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:24 vm00 bash[22468]: cluster 2026-03-09T18:22:23.940842+0000 mon.a (mon.0) 544 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 bash[17468]: audit 2026-03-09T18:22:22.907695+0000 mgr.y (mgr.14152) 138 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm08=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 bash[17468]: cephadm 2026-03-09T18:22:22.908783+0000 mgr.y (mgr.14152) 139 : cephadm [INF] Saving service prometheus spec with placement vm08=a;count:1 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 bash[17468]: audit 2026-03-09T18:22:23.468861+0000 mgr.y (mgr.14152) 140 : audit [DBG] from='client.24320 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 bash[17468]: cephadm 2026-03-09T18:22:23.469744+0000 mgr.y (mgr.14152) 141 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm08=b;count:2 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 bash[17468]: audit 2026-03-09T18:22:23.475416+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 
bash[17468]: audit 2026-03-09T18:22:23.940782+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/4104586833' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T18:22:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:24 vm00 bash[17468]: cluster 2026-03-09T18:22:23.940842+0000 mon.a (mon.0) 544 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T18:22:24.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:24 vm00 bash[17744]: debug 2026-03-09T18:22:24.437+0000 7f18b024e000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:22:25.306 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:24 vm08 bash[18535]: debug 2026-03-09T18:22:24.985+0000 7f837f724000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:22:25.306 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 2026-03-09T18:22:25.081+0000 7f837f724000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:22:25.307 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:24 vm00 bash[17744]: debug 2026-03-09T18:22:24.981+0000 7f18b024e000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:22:25.307 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.077+0000 7f18b024e000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:22:25.619 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 2026-03-09T18:22:25.301+0000 7f837f724000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:22:25.619 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 2026-03-09T18:22:25.413+0000 7f837f724000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:22:25.619 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 
2026-03-09T18:22:25.469+0000 7f837f724000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:22:25.626 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.305+0000 7f18b024e000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:22:25.626 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.421+0000 7f18b024e000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:22:25.626 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.481+0000 7f18b024e000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:22:25.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.621+0000 7f18b024e000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:22:25.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.689+0000 7f18b024e000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:22:25.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:25 vm00 bash[17744]: debug 2026-03-09T18:22:25.769+0000 7f18b024e000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:22:25.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 2026-03-09T18:22:25.613+0000 7f837f724000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:22:25.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 2026-03-09T18:22:25.677+0000 7f837f724000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:22:25.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:25 vm08 bash[18535]: debug 2026-03-09T18:22:25.753+0000 7f837f724000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:22:26.634 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:26 vm00 bash[17744]: debug 2026-03-09T18:22:26.329+0000 7f18b024e000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:22:26.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:26 vm00 bash[17744]: debug 2026-03-09T18:22:26.385+0000 7f18b024e000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:22:26.634 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:26 vm00 bash[17744]: debug 2026-03-09T18:22:26.441+0000 7f18b024e000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:22:26.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.305+0000 7f837f724000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:22:26.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.361+0000 7f837f724000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:22:26.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.425+0000 7f837f724000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:22:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:26 vm00 bash[17744]: debug 2026-03-09T18:22:26.785+0000 7f18b024e000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:22:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:26 vm00 bash[17744]: debug 2026-03-09T18:22:26.849+0000 7f18b024e000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:22:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:26 vm00 bash[17744]: debug 2026-03-09T18:22:26.917+0000 7f18b024e000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:22:27.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:27 vm00 bash[17744]: debug 2026-03-09T18:22:27.013+0000 7f18b024e000 -1 mgr[py] Module orchestrator has 
missing NOTIFY_TYPES member 2026-03-09T18:22:27.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.761+0000 7f837f724000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:22:27.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.829+0000 7f837f724000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:22:27.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.889+0000 7f837f724000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:22:27.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:26 vm08 bash[18535]: debug 2026-03-09T18:22:26.981+0000 7f837f724000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:22:27.605 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:27 vm00 bash[17744]: debug 2026-03-09T18:22:27.349+0000 7f18b024e000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:22:27.605 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:27 vm00 bash[17744]: debug 2026-03-09T18:22:27.541+0000 7f18b024e000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:22:27.628 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:27 vm08 bash[18535]: debug 2026-03-09T18:22:27.313+0000 7f837f724000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:22:27.628 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:27 vm08 bash[18535]: debug 2026-03-09T18:22:27.501+0000 7f837f724000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:22:27.628 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:27 vm08 bash[18535]: debug 2026-03-09T18:22:27.557+0000 7f837f724000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:22:27.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:27 vm00 bash[17744]: debug 2026-03-09T18:22:27.601+0000 
7f18b024e000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:22:27.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:27 vm00 bash[17744]: debug 2026-03-09T18:22:27.665+0000 7f18b024e000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:22:27.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:27 vm00 bash[17744]: debug 2026-03-09T18:22:27.813+0000 7f18b024e000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:22:27.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:27 vm08 bash[18535]: debug 2026-03-09T18:22:27.621+0000 7f837f724000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:22:27.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:27 vm08 bash[18535]: debug 2026-03-09T18:22:27.773+0000 7f837f724000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: audit 2026-03-09T18:22:28.320534+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: audit 2026-03-09T18:22:28.321315+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: audit 2026-03-09T18:22:28.323961+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: audit 2026-03-09T18:22:28.324593+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.? 
192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: cluster 2026-03-09T18:22:28.327702+0000 mon.a (mon.0) 545 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: cluster 2026-03-09T18:22:28.327800+0000 mon.a (mon.0) 546 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: cluster 2026-03-09T18:22:28.355140+0000 mon.a (mon.0) 547 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: cluster 2026-03-09T18:22:28.356064+0000 mon.a (mon.0) 548 : cluster [INF] Activating manager daemon y 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:28 vm08 bash[17774]: cluster 2026-03-09T18:22:28.368539+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:28 vm08 bash[18535]: debug 2026-03-09T18:22:28.313+0000 7f837f724000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:28 vm08 bash[18535]: [09/Mar/2026:18:22:28] ENGINE Bus STARTING 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:28 vm08 bash[18535]: CherryPy Checker: 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:28 vm08 bash[18535]: The Application mounted at '' has an empty config. 
2026-03-09T18:22:28.594 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:28 vm08 bash[18535]: [09/Mar/2026:18:22:28] ENGINE Serving on http://:::9283 2026-03-09T18:22:28.594 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:28 vm08 bash[18535]: [09/Mar/2026:18:22:28] ENGINE Bus STARTED 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: audit 2026-03-09T18:22:28.320534+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: audit 2026-03-09T18:22:28.321315+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: audit 2026-03-09T18:22:28.323961+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: audit 2026-03-09T18:22:28.324593+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.? 
192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: cluster 2026-03-09T18:22:28.327702+0000 mon.a (mon.0) 545 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: cluster 2026-03-09T18:22:28.327800+0000 mon.a (mon.0) 546 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: cluster 2026-03-09T18:22:28.355140+0000 mon.a (mon.0) 547 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: cluster 2026-03-09T18:22:28.356064+0000 mon.a (mon.0) 548 : cluster [INF] Activating manager daemon y 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:28 vm00 bash[17468]: cluster 2026-03-09T18:22:28.368539+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: debug 2026-03-09T18:22:28.349+0000 7f18b024e000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:22:28.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: [09/Mar/2026:18:22:28] ENGINE Bus STARTING 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: audit 2026-03-09T18:22:28.320534+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: audit 2026-03-09T18:22:28.321315+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.? 
192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: audit 2026-03-09T18:22:28.323961+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: audit 2026-03-09T18:22:28.324593+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.? 192.168.123.108:0/1500305064' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: cluster 2026-03-09T18:22:28.327702+0000 mon.a (mon.0) 545 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: cluster 2026-03-09T18:22:28.327800+0000 mon.a (mon.0) 546 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: cluster 2026-03-09T18:22:28.355140+0000 mon.a (mon.0) 547 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: cluster 2026-03-09T18:22:28.356064+0000 mon.a (mon.0) 548 : cluster [INF] Activating manager daemon y 2026-03-09T18:22:28.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:28 vm00 bash[22468]: cluster 2026-03-09T18:22:28.368539+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T18:22:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: CherryPy Checker: 2026-03-09T18:22:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: The Application mounted at '' has an empty config. 
2026-03-09T18:22:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: [09/Mar/2026:18:22:28] ENGINE Serving on http://:::9283 2026-03-09T18:22:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: [09/Mar/2026:18:22:28] ENGINE Bus STARTED 2026-03-09T18:22:29.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: [09/Mar/2026:18:22:28] ENGINE Bus STARTING 2026-03-09T18:22:29.135 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: [09/Mar/2026:18:22:28] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:22:29.135 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:28 vm00 bash[17744]: [09/Mar/2026:18:22:28] ENGINE Bus STARTED 2026-03-09T18:22:29.459 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled alertmanager update... 2026-03-09T18:22:29.523 DEBUG:teuthology.orchestra.run.vm00:alertmanager.a> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@alertmanager.a.service 2026-03-09T18:22:29.524 INFO:tasks.cephadm:Adding grafana.a on vm08 2026-03-09T18:22:29.524 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph orch apply grafana '1;vm08=a' 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: cluster 2026-03-09T18:22:28.408717+0000 mon.a (mon.0) 550 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0528046s), standbys: x 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.422270+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 
2026-03-09T18:22:28.422968+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.424237+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.424567+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.424910+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.425251+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.426209+0000 mon.c (mon.1) 21 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.426738+0000 mon.c (mon.1) 22 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 
2026-03-09T18:22:28.427237+0000 mon.c (mon.1) 23 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.427731+0000 mon.c (mon.1) 24 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.428214+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.428698+0000 mon.c (mon.1) 26 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.429180+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.429791+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.430275+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.431020+0000 mon.c (mon.1) 30 : audit [DBG] 
from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: cluster 2026-03-09T18:22:28.440897+0000 mon.a (mon.0) 551 : cluster [INF] Manager daemon y is now available 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.460691+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.463989+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.478721+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.485887+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:29.685 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.487065+0000 mon.c (mon.1) 34 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.487468+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: 
dispatch 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.488188+0000 mon.c (mon.1) 35 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.514561+0000 mon.c (mon.1) 36 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.515135+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: cephadm 2026-03-09T18:22:28.753962+0000 mgr.y (mgr.24335) 1 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Bus STARTING 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: cephadm 2026-03-09T18:22:28.873396+0000 mgr.y (mgr.24335) 2 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: cephadm 2026-03-09T18:22:28.874719+0000 mgr.y (mgr.24335) 3 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Bus STARTED 2026-03-09T18:22:29.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:29 vm08 bash[17774]: audit 2026-03-09T18:22:28.882724+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: cluster 2026-03-09T18:22:28.408717+0000 mon.a (mon.0) 550 : cluster [DBG] mgrmap e17: y(active, 
starting, since 0.0528046s), standbys: x 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.422270+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.422968+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.424237+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.424567+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.424910+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.425251+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.426209+0000 mon.c (mon.1) 21 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 1}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.426738+0000 mon.c (mon.1) 22 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:22:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.427237+0000 mon.c (mon.1) 23 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.427731+0000 mon.c (mon.1) 24 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.428214+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.428698+0000 mon.c (mon.1) 26 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.429180+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.429791+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:22:29.885 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.430275+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.431020+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: cluster 2026-03-09T18:22:28.440897+0000 mon.a (mon.0) 551 : cluster [INF] Manager daemon y is now available 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.460691+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.463989+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.478721+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.485887+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.487065+0000 mon.c (mon.1) 34 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' 
entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.487468+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.488188+0000 mon.c (mon.1) 35 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.514561+0000 mon.c (mon.1) 36 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.515135+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: cephadm 2026-03-09T18:22:28.753962+0000 mgr.y (mgr.24335) 1 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Bus STARTING 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: cephadm 2026-03-09T18:22:28.873396+0000 mgr.y (mgr.24335) 2 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: cephadm 2026-03-09T18:22:28.874719+0000 mgr.y (mgr.24335) 3 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE 
Bus STARTED 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:29 vm00 bash[22468]: audit 2026-03-09T18:22:28.882724+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: cluster 2026-03-09T18:22:28.408717+0000 mon.a (mon.0) 550 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0528046s), standbys: x 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.422270+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.422968+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.424237+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.424567+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.424910+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 
2026-03-09T18:22:28.425251+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.426209+0000 mon.c (mon.1) 21 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.426738+0000 mon.c (mon.1) 22 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.427237+0000 mon.c (mon.1) 23 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.427731+0000 mon.c (mon.1) 24 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.428214+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.428698+0000 mon.c (mon.1) 26 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.429180+0000 mon.c (mon.1) 
27 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.429791+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.430275+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.431020+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: cluster 2026-03-09T18:22:28.440897+0000 mon.a (mon.0) 551 : cluster [INF] Manager daemon y is now available 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.460691+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.463989+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.478721+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 
vm00 bash[17468]: audit 2026-03-09T18:22:28.485887+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.487065+0000 mon.c (mon.1) 34 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.487468+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.488188+0000 mon.c (mon.1) 35 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.514561+0000 mon.c (mon.1) 36 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.515135+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: cephadm 2026-03-09T18:22:28.753962+0000 mgr.y (mgr.24335) 1 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Bus STARTING 2026-03-09T18:22:29.885 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: cephadm 2026-03-09T18:22:28.873396+0000 mgr.y (mgr.24335) 2 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: cephadm 2026-03-09T18:22:28.874719+0000 mgr.y (mgr.24335) 3 : cephadm [INF] [09/Mar/2026:18:22:28] ENGINE Bus STARTED 2026-03-09T18:22:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:29 vm00 bash[17468]: audit 2026-03-09T18:22:28.882724+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:29.968 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled grafana update... 2026-03-09T18:22:30.020 DEBUG:teuthology.orchestra.run.vm08:grafana.a> sudo journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@grafana.a.service 2026-03-09T18:22:30.021 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T18:22:30.021 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T18:22:30.500 INFO:teuthology.orchestra.run.vm00.stdout:[client.0] 2026-03-09T18:22:30.501 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDmD69pVWeBHRAAy9S3dPHfeOHUfUPuhFw3KA== 2026-03-09T18:22:30.555 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-09T18:22:30.555 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T18:22:30.556 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T18:22:30.567 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: cluster 2026-03-09T18:22:29.426307+0000 mon.a (mon.0) 556 : cluster [DBG] mgrmap e18: y(active, since 1.07039s), standbys: x 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: audit 2026-03-09T18:22:29.429360+0000 mgr.y (mgr.24335) 4 : audit [DBG] from='client.24326 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: cephadm 2026-03-09T18:22:29.431386+0000 mgr.y (mgr.24335) 5 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: cluster 2026-03-09T18:22:29.443640+0000 mgr.y (mgr.24335) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: audit 2026-03-09T18:22:29.450644+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: audit 2026-03-09T18:22:29.960749+0000 mgr.y (mgr.24335) 7 : audit [DBG] from='client.24353 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm08=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:30.705 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: cephadm 2026-03-09T18:22:29.961751+0000 mgr.y (mgr.24335) 8 : cephadm [INF] Saving service grafana spec with placement vm08=a;count:1 2026-03-09T18:22:30.705 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:30 vm08 bash[17774]: audit 2026-03-09T18:22:29.966182+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: cluster 2026-03-09T18:22:29.426307+0000 mon.a (mon.0) 556 : cluster [DBG] mgrmap e18: y(active, since 1.07039s), standbys: x 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: audit 2026-03-09T18:22:29.429360+0000 mgr.y (mgr.24335) 4 : audit [DBG] from='client.24326 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: cephadm 2026-03-09T18:22:29.431386+0000 mgr.y (mgr.24335) 5 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: cluster 2026-03-09T18:22:29.443640+0000 mgr.y (mgr.24335) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: audit 2026-03-09T18:22:29.450644+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: audit 2026-03-09T18:22:29.960749+0000 mgr.y (mgr.24335) 7 : audit [DBG] from='client.24353 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm08=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: cephadm 2026-03-09T18:22:29.961751+0000 mgr.y (mgr.24335) 8 : cephadm [INF] Saving service grafana spec with placement vm08=a;count:1 
2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:30 vm00 bash[22468]: audit 2026-03-09T18:22:29.966182+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: cluster 2026-03-09T18:22:29.426307+0000 mon.a (mon.0) 556 : cluster [DBG] mgrmap e18: y(active, since 1.07039s), standbys: x 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: audit 2026-03-09T18:22:29.429360+0000 mgr.y (mgr.24335) 4 : audit [DBG] from='client.24326 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm00=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: cephadm 2026-03-09T18:22:29.431386+0000 mgr.y (mgr.24335) 5 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: cluster 2026-03-09T18:22:29.443640+0000 mgr.y (mgr.24335) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: audit 2026-03-09T18:22:29.450644+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: audit 2026-03-09T18:22:29.960749+0000 mgr.y (mgr.24335) 7 : audit [DBG] from='client.24353 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm08=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: cephadm 2026-03-09T18:22:29.961751+0000 mgr.y (mgr.24335) 8 : cephadm [INF] Saving service grafana spec with 
placement vm08=a;count:1 2026-03-09T18:22:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:30 vm00 bash[17468]: audit 2026-03-09T18:22:29.966182+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:31.129 INFO:teuthology.orchestra.run.vm08.stdout:[client.1] 2026-03-09T18:22:31.130 INFO:teuthology.orchestra.run.vm08.stdout: key = AQDnD69pdohHBxAAT8ySkZXu+7XweET7HhuP9A== 2026-03-09T18:22:31.185 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T18:22:31.185 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T18:22:31.185 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T18:22:31.197 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-09T18:22:31.197 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T18:22:31.197 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph mgr dump --format=json 2026-03-09T18:22:31.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: cluster 2026-03-09T18:22:30.424014+0000 mgr.y (mgr.24335) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: cluster 2026-03-09T18:22:30.424014+0000 mgr.y (mgr.24335) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: audit 2026-03-09T18:22:30.494867+0000 mon.a (mon.0) 559 : audit [INF] from='client.? 
192.168.123.100:0/4121419410' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: audit 2026-03-09T18:22:30.498243+0000 mon.a (mon.0) 560 : audit [INF] from='client.? 192.168.123.100:0/4121419410' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: cluster 2026-03-09T18:22:30.972845+0000 mon.a (mon.0) 561 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: audit 2026-03-09T18:22:31.121560+0000 mon.c (mon.1) 37 : audit [INF] from='client.? 192.168.123.108:0/3125089424' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: audit 2026-03-09T18:22:31.122040+0000 mon.a (mon.0) 562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.458 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:31 vm08 bash[17774]: audit 2026-03-09T18:22:31.126928+0000 mon.a (mon.0) 563 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: audit 2026-03-09T18:22:30.494867+0000 mon.a (mon.0) 559 : audit [INF] from='client.? 192.168.123.100:0/4121419410' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: audit 2026-03-09T18:22:30.498243+0000 mon.a (mon.0) 560 : audit [INF] from='client.? 192.168.123.100:0/4121419410' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: cluster 2026-03-09T18:22:30.972845+0000 mon.a (mon.0) 561 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: audit 2026-03-09T18:22:31.121560+0000 mon.c (mon.1) 37 : audit [INF] from='client.? 192.168.123.108:0/3125089424' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: audit 2026-03-09T18:22:31.122040+0000 mon.a (mon.0) 562 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:31 vm00 bash[17468]: audit 2026-03-09T18:22:31.126928+0000 mon.a (mon.0) 563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: cluster 2026-03-09T18:22:30.424014+0000 mgr.y (mgr.24335) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: audit 2026-03-09T18:22:30.494867+0000 mon.a (mon.0) 559 : audit [INF] from='client.? 192.168.123.100:0/4121419410' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: audit 2026-03-09T18:22:30.498243+0000 mon.a (mon.0) 560 : audit [INF] from='client.? 192.168.123.100:0/4121419410' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: cluster 2026-03-09T18:22:30.972845+0000 mon.a (mon.0) 561 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: audit 2026-03-09T18:22:31.121560+0000 mon.c (mon.1) 37 : audit [INF] from='client.? 
192.168.123.108:0/3125089424' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: audit 2026-03-09T18:22:31.122040+0000 mon.a (mon.0) 562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:22:31.742 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:31 vm00 bash[22468]: audit 2026-03-09T18:22:31.126928+0000 mon.a (mon.0) 563 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:22:32.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.633 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:32.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.633 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.633 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.633 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:32.633 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.633 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:31.749845+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:31.897110+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.050059+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.051738+0000 mon.c (mon.1) 38 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.052084+0000 mon.a (mon.0) 567 : 
audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.166002+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.182060+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.183145+0000 mon.c (mon.1) 39 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.183335+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.184028+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.184208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.184819+0000 mon.c (mon.1) 41 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.184975+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.185498+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.185667+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.312923+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[17468]: audit 2026-03-09T18:22:32.318776+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:31.749845+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:31.897110+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.050059+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.051738+0000 mon.c (mon.1) 38 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.052084+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.166002+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.182060+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.183145+0000 
mon.c (mon.1) 39 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.183335+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.184028+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.184208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.184819+0000 mon.c (mon.1) 41 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.184975+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.185498+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.185667+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.312923+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 bash[22468]: audit 2026-03-09T18:22:32.318776+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:32.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:32.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:32.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:32 vm00 systemd[1]: Started Ceph node-exporter.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:31.749845+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:31.897110+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.050059+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.051738+0000 mon.c (mon.1) 38 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.052084+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.166002+0000 mon.a 
(mon.0) 568 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.182060+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.183145+0000 mon.c (mon.1) 39 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.183335+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.184028+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.184208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.184819+0000 mon.c (mon.1) 41 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.184975+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", 
"who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.185498+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.185667+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.312923+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:32 vm08 bash[17774]: audit 2026-03-09T18:22:32.318776+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:33.021 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:33.355 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:33.355 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.355 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:33.355 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.384 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:32 vm00 bash[37273]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally 2026-03-09T18:22:33.528 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:33.726 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 systemd[1]: Started Ceph node-exporter.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:22:33.726 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[32685]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally
2026-03-09T18:22:33.936 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:22:33.988 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":19,"active_gid":24335,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":2237172914},{"type":"v1","addr":"192.168.123.100:6801","nonce":2237172914}]},"active_addr":"192.168.123.100:6801/2237172914","active_change":"2026-03-09T18:22:28.355895+0000","active_mgr_features":4540138303579357183,"available":true,"standbys":[{"gid":24338,"name":"x","mgr_features":4540138303579357183,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD 
utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_al
lowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log 
channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of 
cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"
","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default
_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_valu
e":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","fl
ags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":
[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level 
for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also
":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed"
:["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_aut
oscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","m
in":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"serv
er_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"",
"min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","l
ong_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags
":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay 
seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"defau
lt_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP 
server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which 
no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this option can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) 
inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log 
channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of 
cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"
","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default
_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_valu
e":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","fl
ags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":
[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level 
for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also
":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed"
:["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_aut
oscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","m
in":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"serv
er_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"",
"min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","l
ong_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags
":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay 
seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"defau
lt_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/","prometheus":"http://192.168.123.100:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"last_failure_osd_epoch":51,"active_clients":[{"addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1003484560}]},{"addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2989229681}]},{"addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2239094263}]},{"addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3488844935}]}]}} 2026-03-09T18:22:33.990 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 
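The "waiting for all up" step that follows polls `ceph osd dump --format=json` until every OSD reports up. A minimal sketch of that check, assuming the standard `osd dump` JSON shape (an `"osds"` list whose entries carry integer `"up"`/`"in"` flags); the sample dump is fabricated for illustration, not taken from this run:

```python
# Sketch of the "waiting for all up" poll: parse a `ceph osd dump
# --format=json` blob and report whether every OSD is up and in.
# Assumption: the dump has an "osds" list with integer "up"/"in" flags,
# as in the standard `osd dump` output.
import json

def all_osds_up(dump_json: str) -> bool:
    """Return True when every OSD in an `osd dump` blob is up and in."""
    dump = json.loads(dump_json)
    osds = dump.get("osds", [])
    # An empty OSD list means the cluster is not ready yet.
    return bool(osds) and all(
        o.get("up") == 1 and o.get("in") == 1 for o in osds
    )

# Fabricated two-OSD sample for illustration only.
sample = json.dumps({"osds": [{"osd": 0, "up": 1, "in": 1},
                              {"osd": 1, "up": 1, "in": 1}]})
print(all_osds_up(sample))  # True for the fabricated sample
```

A harness would call this in a loop with a timeout, re-running `ceph osd dump` between attempts, which matches the repeated shell invocations visible in the log below.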
2026-03-09T18:22:33.990 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T18:22:33.990 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd dump --format=json 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.052877+0000 mgr.y (mgr.24335) 10 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.107144+0000 mgr.y (mgr.24335) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.186118+0000 mgr.y (mgr.24335) 12 : cephadm [INF] Adjusting osd_memory_target on vm08 to 113.9M 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.186735+0000 mgr.y (mgr.24335) 13 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.186794+0000 mgr.y (mgr.24335) 14 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.246430+0000 mgr.y (mgr.24335) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.321710+0000 mgr.y (mgr.24335) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm00 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cluster 
2026-03-09T18:22:32.424271+0000 mgr.y (mgr.24335) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: audit 2026-03-09T18:22:32.906469+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: cephadm 2026-03-09T18:22:32.909898+0000 mgr.y (mgr.24335) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm08 2026-03-09T18:22:34.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:33 vm08 bash[17774]: audit 2026-03-09T18:22:33.459157+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.052877+0000 mgr.y (mgr.24335) 10 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.107144+0000 mgr.y (mgr.24335) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.186118+0000 mgr.y (mgr.24335) 12 : cephadm [INF] Adjusting osd_memory_target on vm08 to 113.9M 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.186735+0000 mgr.y (mgr.24335) 13 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.186794+0000 mgr.y (mgr.24335) 14 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.246430+0000 mgr.y (mgr.24335) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.321710+0000 mgr.y (mgr.24335) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm00 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cluster 2026-03-09T18:22:32.424271+0000 mgr.y (mgr.24335) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: audit 2026-03-09T18:22:32.906469+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: cephadm 2026-03-09T18:22:32.909898+0000 mgr.y (mgr.24335) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm08 2026-03-09T18:22:34.284 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:33 vm00 bash[22468]: audit 2026-03-09T18:22:33.459157+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:34.285 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:34 vm00 bash[37273]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.052877+0000 mgr.y (mgr.24335) 10 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.107144+0000 mgr.y (mgr.24335) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.186118+0000 mgr.y (mgr.24335) 12 : cephadm 
[INF] Adjusting osd_memory_target on vm08 to 113.9M 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.186735+0000 mgr.y (mgr.24335) 13 : cephadm [WRN] Unable to set osd_memory_target on vm08 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.186794+0000 mgr.y (mgr.24335) 14 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.246430+0000 mgr.y (mgr.24335) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.321710+0000 mgr.y (mgr.24335) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm00 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cluster 2026-03-09T18:22:32.424271+0000 mgr.y (mgr.24335) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: audit 2026-03-09T18:22:32.906469+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: cephadm 2026-03-09T18:22:32.909898+0000 mgr.y (mgr.24335) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm08 2026-03-09T18:22:34.285 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:33 vm00 bash[17468]: audit 2026-03-09T18:22:33.459157+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:34.884 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:34 vm00 bash[37273]: aa2a8d90b84c: Pulling fs 
layer 2026-03-09T18:22:34.884 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:34 vm00 bash[37273]: b45d31ee2d7f: Pulling fs layer 2026-03-09T18:22:34.884 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:34 vm00 bash[37273]: b5db1e299295: Pulling fs layer 2026-03-09T18:22:35.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:34 vm00 bash[22468]: cephadm 2026-03-09T18:22:33.468640+0000 mgr.y (mgr.24335) 19 : cephadm [INF] Deploying daemon prometheus.a on vm08 2026-03-09T18:22:35.174 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:34 vm00 bash[22468]: audit 2026-03-09T18:22:33.930588+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.100:0/904327410' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:22:35.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:34 vm00 bash[17468]: cephadm 2026-03-09T18:22:33.468640+0000 mgr.y (mgr.24335) 19 : cephadm [INF] Deploying daemon prometheus.a on vm08 2026-03-09T18:22:35.174 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:34 vm00 bash[17468]: audit 2026-03-09T18:22:33.930588+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.100:0/904327410' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:22:35.175 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: b45d31ee2d7f: Verifying Checksum 2026-03-09T18:22:35.175 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: b45d31ee2d7f: Download complete 2026-03-09T18:22:35.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:34 vm08 bash[17774]: cephadm 2026-03-09T18:22:33.468640+0000 mgr.y (mgr.24335) 19 : cephadm [INF] Deploying daemon prometheus.a on vm08 2026-03-09T18:22:35.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:34 vm08 bash[17774]: audit 2026-03-09T18:22:33.930588+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 
192.168.123.100:0/904327410' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:22:35.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:34 vm08 bash[32685]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-09T18:22:35.453 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: aa2a8d90b84c: Download complete 2026-03-09T18:22:35.453 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: aa2a8d90b84c: Pull complete 2026-03-09T18:22:35.454 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: b5db1e299295: Verifying Checksum 2026-03-09T18:22:35.454 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: b5db1e299295: Download complete 2026-03-09T18:22:35.454 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: b45d31ee2d7f: Pull complete 2026-03-09T18:22:35.725 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: aa2a8d90b84c: Pulling fs layer 2026-03-09T18:22:35.725 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b45d31ee2d7f: Pulling fs layer 2026-03-09T18:22:35.725 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b5db1e299295: Pulling fs layer 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: b5db1e299295: Pull complete 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: 
ts=2026-03-09T18:22:35.590Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.590Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.591Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.591Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=arp 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info 
collector=bonding 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=edac 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-09T18:22:35.885 
INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 
bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-09T18:22:35.885 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=os 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.592Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info 
collector=stat 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=time 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=uname 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-09T18:22:35.886 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-09T18:22:35.886 
INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[37273]: ts=2026-03-09T18:22:35.593Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false 2026-03-09T18:22:36.085 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[17774]: cluster 2026-03-09T18:22:34.424551+0000 mgr.y (mgr.24335) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b45d31ee2d7f: Verifying Checksum 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b45d31ee2d7f: Download complete 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: aa2a8d90b84c: Verifying Checksum 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: aa2a8d90b84c: Download complete 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: aa2a8d90b84c: Pull complete 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b5db1e299295: Verifying Checksum 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b5db1e299295: Download complete 2026-03-09T18:22:36.085 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:35 vm08 bash[32685]: b45d31ee2d7f: Pull complete 2026-03-09T18:22:36.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:35 vm00 bash[17468]: cluster 2026-03-09T18:22:34.424551+0000 mgr.y (mgr.24335) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:36.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:35 vm00 bash[22468]: cluster 2026-03-09T18:22:34.424551+0000 mgr.y (mgr.24335) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 
active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: b5db1e299295: Pull complete 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.237Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.237Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.238Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.238Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 
vm08 bash[32685]: ts=2026-03-09T18:22:36.239Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-09T18:22:36.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.239Z caller=node_exporter.go:115 level=info collector=arp 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.239Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.240Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 
level=info collector=edac 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.241Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-09T18:22:36.476 
INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.242Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=os 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: 
ts=2026-03-09T18:22:36.243Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=stat 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=time 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=uname 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info 
collector=vmstat 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.244Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.245Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-09T18:22:36.476 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:36 vm08 bash[32685]: ts=2026-03-09T18:22:36.245Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false 2026-03-09T18:22:37.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:37 vm00 bash[22468]: cluster 2026-03-09T18:22:36.424857+0000 mgr.y (mgr.24335) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:37.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:37 vm00 bash[17468]: cluster 2026-03-09T18:22:36.424857+0000 mgr.y (mgr.24335) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:37.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:37 vm08 bash[17774]: cluster 2026-03-09T18:22:36.424857+0000 mgr.y (mgr.24335) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:37.605 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:37.964 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:37.964 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":51,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","created":"2026-03-09T18:19:14.917724+0000","modified":"2026-03-09T18:22:28.355212+0000","last_up_change":"2026-03-09T18:22:16.082402+0000","last_in_change":"2026-03-09T18:22:03.206760+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T18:20:56.568822+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"us
e_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"b0cac7d6-07bf-4b00-9243-24f6ec5bc470","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6803","nonce":4034633438}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6805","nonce":4034633438}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6809","nonce":4034633438}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6807","nonce":4034633438}]},"public_addr":"192.168.123.100:6803/4034633438","cluster_addr":"192.168.123.100:6805/4034633438","heartbeat_back_addr":"192.168.123.100:6809/4034633438","heartbeat_front_addr":"192.168.123.100:6807/4034633438","state":["exists","up"]},{"osd":1,"uuid":"9fc1e6b3-451c-497e-a994-131046179fb9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6811","nonce":3881919578}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6813","nonce":3881919578}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":3881919578},{"type":"v1","addr":"192.168.123
.100:6817","nonce":3881919578}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6815","nonce":3881919578}]},"public_addr":"192.168.123.100:6811/3881919578","cluster_addr":"192.168.123.100:6813/3881919578","heartbeat_back_addr":"192.168.123.100:6817/3881919578","heartbeat_front_addr":"192.168.123.100:6815/3881919578","state":["exists","up"]},{"osd":2,"uuid":"b6754d4f-0b5b-4d48-8415-b590ff7d2cdb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6819","nonce":1380134913}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6821","nonce":1380134913}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6825","nonce":1380134913}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6823","nonce":1380134913}]},"public_addr":"192.168.123.100:6819/1380134913","cluster_addr":"192.168.123.100:6821/1380134913","heartbeat_back_addr":"192.168.123.100:6825/1380134913","heartbeat_front_addr":"192.168.123.100:6823/1380134913","state":["exists","up"]},{"osd":3,"uuid":"04bdb6c0-c351-4b7e-b364-865748cfae11","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6827","nonce":51325005}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6829","nonce":51325005}]},"heartbeat_b
ack_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6833","nonce":51325005}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6831","nonce":51325005}]},"public_addr":"192.168.123.100:6827/51325005","cluster_addr":"192.168.123.100:6829/51325005","heartbeat_back_addr":"192.168.123.100:6833/51325005","heartbeat_front_addr":"192.168.123.100:6831/51325005","state":["exists","up"]},{"osd":4,"uuid":"28dbafde-327a-4cb7-aaf4-8f0bed8a7a21","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6801","nonce":3738925586}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6803","nonce":3738925586}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6807","nonce":3738925586}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6805","nonce":3738925586}]},"public_addr":"192.168.123.108:6801/3738925586","cluster_addr":"192.168.123.108:6803/3738925586","heartbeat_back_addr":"192.168.123.108:6807/3738925586","heartbeat_front_addr":"192.168.123.108:6805/3738925586","state":["exists","up"]},{"osd":5,"uuid":"c8fd35d5-49cd-4d8e-981a-afb708e47c9d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6808","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6809","nonce":3115835875}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.
123.108:6810","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6811","nonce":3115835875}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6814","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6815","nonce":3115835875}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6812","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6813","nonce":3115835875}]},"public_addr":"192.168.123.108:6809/3115835875","cluster_addr":"192.168.123.108:6811/3115835875","heartbeat_back_addr":"192.168.123.108:6815/3115835875","heartbeat_front_addr":"192.168.123.108:6813/3115835875","state":["exists","up"]},{"osd":6,"uuid":"fdedf8fe-f1d9-48e7-9db9-df7cf33b1093","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":42,"up_thru":43,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6816","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6817","nonce":3870182675}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6818","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6819","nonce":3870182675}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6822","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6823","nonce":3870182675}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6820","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6821","nonce":3870182675}]},"public_addr":"192.168.123.108:6817/3870182675","cluster_addr":"192.168.123.108:6819/3870182675","heartbeat_back_addr":"192.168.123.108:6823/3870182675","heartbeat_front_addr":"192.168.123.108:6821/3870182675","state":["exists","up"]},{"osd":7,"uuid":"e8972f61-b0b9-45d8-8b8e-e660f598240a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":48,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6824","nonce":110152
2923},{"type":"v1","addr":"192.168.123.108:6825","nonce":1101522923}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6826","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6827","nonce":1101522923}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6830","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6831","nonce":1101522923}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6828","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6829","nonce":1101522923}]},"public_addr":"192.168.123.108:6825/1101522923","cluster_addr":"192.168.123.108:6827/1101522923","heartbeat_back_addr":"192.168.123.108:6831/1101522923","heartbeat_front_addr":"192.168.123.108:6829/1101522923","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:22.289419+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:38.302448+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:53.686141+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:09.836659+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:25.246445+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:42.004834+0000","dead_epoch":0},{"osd":6,"down_stamp"
:"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:57.459466+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:22:15.038732+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/1438077138":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/3565704494":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/2057130512":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/2374936913":"2026-03-10T18:22:28.355170+0000","192.168.123.100:6800/1230841882":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/2948627942":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6801/1230841882":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/1939871250":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6800/1514471438":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6801/1514471438":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/2433128758":"2026-03-10T18:19:29.532285+0000","192.168.123.100:0/3360653556":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/4158221249":"2026-03-10T18:19:29.532285+0000","192.168.123.100:6801/4196289624":"2026-03-10T18:19:29.532285+0000","192.168.123.100:6800/4196289624":"2026-03-10T18:19:29.532285+0000","192.168.123.100:0/3505528954":"2026-03-10T18:19:29.532285+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:22:38.019 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-09T18:22:38.019 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd dump --format=json 2026-03-09T18:22:38.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:38 vm00 bash[22468]: audit 2026-03-09T18:22:37.963412+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.100:0/4222650502' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:38.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:38 vm00 bash[17468]: audit 2026-03-09T18:22:37.963412+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.100:0/4222650502' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:38 vm08 bash[17774]: audit 2026-03-09T18:22:37.963412+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.100:0/4222650502' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:39 vm08 bash[17774]: cluster 2026-03-09T18:22:38.425140+0000 mgr.y (mgr.24335) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:39 vm08 bash[17774]: audit 2026-03-09T18:22:38.467135+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:39.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:39 vm00 bash[22468]: cluster 2026-03-09T18:22:38.425140+0000 mgr.y (mgr.24335) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:39.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:39 vm00 bash[22468]: audit 2026-03-09T18:22:38.467135+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24335 ' entity='mgr.y' 
2026-03-09T18:22:39.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:39 vm00 bash[17468]: cluster 2026-03-09T18:22:38.425140+0000 mgr.y (mgr.24335) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:39.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:39 vm00 bash[17468]: audit 2026-03-09T18:22:38.467135+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:40.059 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.059 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:40.060 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:40.060 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:40.060 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.060 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:39 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:40.384 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 systemd[1]: Started Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.208Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.208Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.208Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.208Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm08 (none))" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.208Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.208Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.210Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.210Z caller=main.go:923 
level=info msg="Starting TSDB ..." 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.211Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.212Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.212Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.863µs 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.212Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.213Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.213Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=95.258µs wal_replay_duration=213.559µs total_replay_duration=454.53µs 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.213Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.213Z caller=main.go:947 level=info msg="TSDB started" 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.213Z caller=main.go:1128 
level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.227Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=13.746204ms db_storage=622ns remote_storage=1.292µs web_handler=240ns query_engine=872ns scrape=749.673µs scrape_sd=28.753µs notify=862ns notify_sd=811ns rules=12.71159ms 2026-03-09T18:22:40.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:40 vm08 bash[33074]: ts=2026-03-09T18:22:40.227Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 2026-03-09T18:22:40.645 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:41.005 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:41.005 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":51,"fsid":"614f4990-1be4-11f1-8b84-dfd1edd9d965","created":"2026-03-09T18:19:14.917724+0000","modified":"2026-03-09T18:22:28.355212+0000","last_up_change":"2026-03-09T18:22:16.082402+0000","last_in_change":"2026-03-09T18:22:03.206760+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T18:20:56.568822+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandator
y_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"b0cac7d6-07bf-4b00-9243-24f6ec5bc470","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6803","nonce":4034633438}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6805","nonce":4034633438}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6809","nonce":4034633438}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.1
23.100:6806","nonce":4034633438},{"type":"v1","addr":"192.168.123.100:6807","nonce":4034633438}]},"public_addr":"192.168.123.100:6803/4034633438","cluster_addr":"192.168.123.100:6805/4034633438","heartbeat_back_addr":"192.168.123.100:6809/4034633438","heartbeat_front_addr":"192.168.123.100:6807/4034633438","state":["exists","up"]},{"osd":1,"uuid":"9fc1e6b3-451c-497e-a994-131046179fb9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6811","nonce":3881919578}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6813","nonce":3881919578}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6817","nonce":3881919578}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3881919578},{"type":"v1","addr":"192.168.123.100:6815","nonce":3881919578}]},"public_addr":"192.168.123.100:6811/3881919578","cluster_addr":"192.168.123.100:6813/3881919578","heartbeat_back_addr":"192.168.123.100:6817/3881919578","heartbeat_front_addr":"192.168.123.100:6815/3881919578","state":["exists","up"]},{"osd":2,"uuid":"b6754d4f-0b5b-4d48-8415-b590ff7d2cdb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6819","nonce":1380134913}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6821","nonce":1380134913}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1380134913},{"ty
pe":"v1","addr":"192.168.123.100:6825","nonce":1380134913}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1380134913},{"type":"v1","addr":"192.168.123.100:6823","nonce":1380134913}]},"public_addr":"192.168.123.100:6819/1380134913","cluster_addr":"192.168.123.100:6821/1380134913","heartbeat_back_addr":"192.168.123.100:6825/1380134913","heartbeat_front_addr":"192.168.123.100:6823/1380134913","state":["exists","up"]},{"osd":3,"uuid":"04bdb6c0-c351-4b7e-b364-865748cfae11","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6827","nonce":51325005}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6829","nonce":51325005}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6833","nonce":51325005}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":51325005},{"type":"v1","addr":"192.168.123.100:6831","nonce":51325005}]},"public_addr":"192.168.123.100:6827/51325005","cluster_addr":"192.168.123.100:6829/51325005","heartbeat_back_addr":"192.168.123.100:6833/51325005","heartbeat_front_addr":"192.168.123.100:6831/51325005","state":["exists","up"]},{"osd":4,"uuid":"28dbafde-327a-4cb7-aaf4-8f0bed8a7a21","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6801","nonce":3738925586}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6803","nonce":3738925586}]},
"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6807","nonce":3738925586}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3738925586},{"type":"v1","addr":"192.168.123.108:6805","nonce":3738925586}]},"public_addr":"192.168.123.108:6801/3738925586","cluster_addr":"192.168.123.108:6803/3738925586","heartbeat_back_addr":"192.168.123.108:6807/3738925586","heartbeat_front_addr":"192.168.123.108:6805/3738925586","state":["exists","up"]},{"osd":5,"uuid":"c8fd35d5-49cd-4d8e-981a-afb708e47c9d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6808","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6809","nonce":3115835875}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6810","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6811","nonce":3115835875}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6814","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6815","nonce":3115835875}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6812","nonce":3115835875},{"type":"v1","addr":"192.168.123.108:6813","nonce":3115835875}]},"public_addr":"192.168.123.108:6809/3115835875","cluster_addr":"192.168.123.108:6811/3115835875","heartbeat_back_addr":"192.168.123.108:6815/3115835875","heartbeat_front_addr":"192.168.123.108:6813/3115835875","state":["exists","up"]},{"osd":6,"uuid":"fdedf8fe-f1d9-48e7-9db9-df7cf33b1093","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":42,"up_thru":43,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6816","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6817","nonce":3870182675}]},"cluster_addrs":{"addrvec":[
{"type":"v2","addr":"192.168.123.108:6818","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6819","nonce":3870182675}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6822","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6823","nonce":3870182675}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6820","nonce":3870182675},{"type":"v1","addr":"192.168.123.108:6821","nonce":3870182675}]},"public_addr":"192.168.123.108:6817/3870182675","cluster_addr":"192.168.123.108:6819/3870182675","heartbeat_back_addr":"192.168.123.108:6823/3870182675","heartbeat_front_addr":"192.168.123.108:6821/3870182675","state":["exists","up"]},{"osd":7,"uuid":"e8972f61-b0b9-45d8-8b8e-e660f598240a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":48,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6824","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6825","nonce":1101522923}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6826","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6827","nonce":1101522923}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6830","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6831","nonce":1101522923}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6828","nonce":1101522923},{"type":"v1","addr":"192.168.123.108:6829","nonce":1101522923}]},"public_addr":"192.168.123.108:6825/1101522923","cluster_addr":"192.168.123.108:6827/1101522923","heartbeat_back_addr":"192.168.123.108:6831/1101522923","heartbeat_front_addr":"192.168.123.108:6829/1101522923","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:22.289419+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","la
ggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:38.302448+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:20:53.686141+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:09.836659+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:25.246445+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:42.004834+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:21:57.459466+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:22:15.038732+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/1438077138":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/3565704494":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/2057130512":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/2374936913":"2026-03-10T18:22:28.355170+0000","192.168.123.100:6800/1230841882":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/2948627942":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6801/1230841882":"2026-03-10T18:22:28.355170+0000","192.168.123.100:0/1939871250":"2026-03-10T18:19:40.307892+0000","192.168.123.100:6800/1514471438":"2026-03-10T18:19:40.3078
92+0000","192.168.123.100:6801/1514471438":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/2433128758":"2026-03-10T18:19:29.532285+0000","192.168.123.100:0/3360653556":"2026-03-10T18:19:40.307892+0000","192.168.123.100:0/4158221249":"2026-03-10T18:19:29.532285+0000","192.168.123.100:6801/4196289624":"2026-03-10T18:19:29.532285+0000","192.168.123.100:6800/4196289624":"2026-03-10T18:19:29.532285+0000","192.168.123.100:0/3505528954":"2026-03-10T18:19:29.532285+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:22:41.052 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.0 flush_pg_stats 2026-03-09T18:22:41.052 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.1 flush_pg_stats 2026-03-09T18:22:41.053 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.2 flush_pg_stats 2026-03-09T18:22:41.053 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.3 flush_pg_stats 2026-03-09T18:22:41.053 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.4 flush_pg_stats 2026-03-09T18:22:41.053 
DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.5 flush_pg_stats 2026-03-09T18:22:41.053 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.6 flush_pg_stats 2026-03-09T18:22:41.053 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph tell osd.7 flush_pg_stats 2026-03-09T18:22:41.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:41 vm00 bash[17468]: audit 2026-03-09T18:22:40.088065+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:41 vm00 bash[17468]: cephadm 2026-03-09T18:22:40.093593+0000 mgr.y (mgr.24335) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:41 vm00 bash[17468]: cluster 2026-03-09T18:22:40.425530+0000 mgr.y (mgr.24335) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:41 vm00 bash[17468]: audit 2026-03-09T18:22:41.002300+0000 mon.c (mon.1) 44 : audit [DBG] from='client.? 
192.168.123.100:0/502846804' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:41 vm00 bash[22468]: audit 2026-03-09T18:22:40.088065+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:41 vm00 bash[22468]: cephadm 2026-03-09T18:22:40.093593+0000 mgr.y (mgr.24335) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:41 vm00 bash[22468]: cluster 2026-03-09T18:22:40.425530+0000 mgr.y (mgr.24335) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:41.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:41 vm00 bash[22468]: audit 2026-03-09T18:22:41.002300+0000 mon.c (mon.1) 44 : audit [DBG] from='client.? 192.168.123.100:0/502846804' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:41 vm08 bash[17774]: audit 2026-03-09T18:22:40.088065+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:41 vm08 bash[17774]: cephadm 2026-03-09T18:22:40.093593+0000 mgr.y (mgr.24335) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T18:22:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:41 vm08 bash[17774]: cluster 2026-03-09T18:22:40.425530+0000 mgr.y (mgr.24335) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:41.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:41 vm08 bash[17774]: audit 2026-03-09T18:22:41.002300+0000 mon.c (mon.1) 44 : audit [DBG] from='client.? 
192.168.123.100:0/502846804' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:43 vm00 bash[22468]: cluster 2026-03-09T18:22:42.425870+0000 mgr.y (mgr.24335) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:43 vm00 bash[22468]: audit 2026-03-09T18:22:43.476730+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:43 vm00 bash[17468]: cluster 2026-03-09T18:22:42.425870+0000 mgr.y (mgr.24335) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:43 vm00 bash[17468]: audit 2026-03-09T18:22:43.476730+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:43.737 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:43.737 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:43.737 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:43.737 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:43.737 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:43.737 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:43.737 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:43 vm08 bash[17774]: cluster 2026-03-09T18:22:42.425870+0000 mgr.y (mgr.24335) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:43 vm08 bash[17774]: audit 2026-03-09T18:22:43.476730+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:44.130 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.131 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.133 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.133 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.138 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.138 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:44.138 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.138 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.138 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.138 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:44.138 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.138 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.139 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.139 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:44.139 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:44.139 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:43 vm00 systemd[1]: Started Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:22:44.139 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.139 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.140 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.144 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.266Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)" 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.266Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)" 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.269Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" 
addr=192.168.123.100 port=9094 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.269Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.324Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.324Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.330Z caller=main.go:518 msg=Listening address=:9093 2026-03-09T18:22:44.426 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:44 vm00 bash[38226]: level=info ts=2026-03-09T18:22:44.330Z caller=tls_config.go:191 msg="TLS is disabled." http2=false 2026-03-09T18:22:44.475 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:45.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: audit 2026-03-09T18:22:44.100819+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: audit 2026-03-09T18:22:44.143481+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: audit 2026-03-09T18:22:44.156180+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: audit 2026-03-09T18:22:44.162687+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: audit 2026-03-09T18:22:44.164411+0000 mgr.y (mgr.24335) 26 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: audit 2026-03-09T18:22:44.178987+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: cephadm 2026-03-09T18:22:44.204645+0000 mgr.y (mgr.24335) 27 : cephadm [INF] Deploying daemon grafana.a on vm08 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:45 vm00 bash[17468]: cluster 2026-03-09T18:22:44.426144+0000 mgr.y (mgr.24335) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: audit 2026-03-09T18:22:44.100819+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: audit 2026-03-09T18:22:44.143481+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: audit 2026-03-09T18:22:44.156180+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: audit 2026-03-09T18:22:44.162687+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: audit 2026-03-09T18:22:44.164411+0000 mgr.y (mgr.24335) 26 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: audit 2026-03-09T18:22:44.178987+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: cephadm 2026-03-09T18:22:44.204645+0000 mgr.y (mgr.24335) 27 : cephadm [INF] Deploying daemon grafana.a on vm08 2026-03-09T18:22:45.252 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:45 vm00 bash[22468]: cluster 2026-03-09T18:22:44.426144+0000 mgr.y (mgr.24335) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:45.413 INFO:teuthology.orchestra.run.vm00.stdout:154618822671 2026-03-09T18:22:45.413 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.5 2026-03-09T18:22:45.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: audit 2026-03-09T18:22:44.100819+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: audit 2026-03-09T18:22:44.143481+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: audit 2026-03-09T18:22:44.156180+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: audit 2026-03-09T18:22:44.162687+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:22:45.475 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: audit 2026-03-09T18:22:44.164411+0000 mgr.y (mgr.24335) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:22:45.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: audit 2026-03-09T18:22:44.178987+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:45.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: cephadm 2026-03-09T18:22:44.204645+0000 mgr.y (mgr.24335) 27 : cephadm [INF] Deploying daemon grafana.a on vm08 2026-03-09T18:22:45.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:45 vm08 bash[17774]: cluster 2026-03-09T18:22:44.426144+0000 mgr.y (mgr.24335) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:45.527 INFO:teuthology.orchestra.run.vm00.stdout:34359738397 2026-03-09T18:22:45.527 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.0 2026-03-09T18:22:45.562 INFO:teuthology.orchestra.run.vm00.stdout:107374182421 2026-03-09T18:22:45.562 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.3 2026-03-09T18:22:45.751 INFO:teuthology.orchestra.run.vm00.stdout:55834574875 2026-03-09T18:22:45.751 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.1 2026-03-09T18:22:45.824 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:22:45] "GET /metrics HTTP/1.1" 200 191100 "" 
"Prometheus/2.33.4" 2026-03-09T18:22:45.897 INFO:teuthology.orchestra.run.vm00.stdout:206158430215 2026-03-09T18:22:45.897 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.7 2026-03-09T18:22:45.928 INFO:teuthology.orchestra.run.vm00.stdout:77309411351 2026-03-09T18:22:45.928 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.2 2026-03-09T18:22:45.942 INFO:teuthology.orchestra.run.vm00.stdout:180388626442 2026-03-09T18:22:45.956 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.6 2026-03-09T18:22:45.957 INFO:teuthology.orchestra.run.vm00.stdout:128849018897 2026-03-09T18:22:45.957 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph osd last-stat-seq osd.4 2026-03-09T18:22:46.634 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:46 vm00 bash[38226]: level=info ts=2026-03-09T18:22:46.270Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000159467s 2026-03-09T18:22:47.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:47 vm00 bash[22468]: cluster 2026-03-09T18:22:46.426505+0000 mgr.y (mgr.24335) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:47.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:47 vm00 bash[17468]: cluster 2026-03-09T18:22:46.426505+0000 mgr.y (mgr.24335) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:47.475 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:47 vm08 bash[17774]: cluster 2026-03-09T18:22:46.426505+0000 mgr.y (mgr.24335) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:48.293 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.294 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.294 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.296 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.298 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.298 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.301 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.303 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:48.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:48 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:22:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:22:49.231 INFO:teuthology.orchestra.run.vm00.stdout:55834574875 2026-03-09T18:22:49.503 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:49 vm00 bash[17468]: cluster 2026-03-09T18:22:48.426818+0000 mgr.y (mgr.24335) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:49.503 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:49 vm00 bash[17468]: audit 2026-03-09T18:22:48.492244+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:49.504 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:49 vm00 bash[17468]: audit 2026-03-09T18:22:49.209018+0000 mon.c (mon.1) 46 : audit [DBG] from='client.? 192.168.123.100:0/3753650516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:22:49.564 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574875 got 55834574875 for osd.1 2026-03-09T18:22:49.564 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:49.593 INFO:teuthology.orchestra.run.vm00.stdout:107374182421 2026-03-09T18:22:49.636 INFO:teuthology.orchestra.run.vm00.stdout:128849018897 2026-03-09T18:22:49.763 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:49 vm00 bash[22468]: cluster 2026-03-09T18:22:48.426818+0000 mgr.y (mgr.24335) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:49.763 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:49 vm00 bash[22468]: audit 2026-03-09T18:22:48.492244+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:49.763 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:49 vm00 bash[22468]: audit 2026-03-09T18:22:49.209018+0000 mon.c (mon.1) 46 : audit [DBG] from='client.? 
192.168.123.100:0/3753650516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:22:49.770 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182421 got 107374182421 for osd.3 2026-03-09T18:22:49.770 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:49.796 INFO:teuthology.orchestra.run.vm00.stdout:34359738397 2026-03-09T18:22:49.832 INFO:tasks.cephadm.ceph_manager.ceph:need seq 128849018897 got 128849018897 for osd.4 2026-03-09T18:22:49.832 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:49.926 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738397 got 34359738397 for osd.0 2026-03-09T18:22:49.927 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:49.963 INFO:teuthology.orchestra.run.vm00.stdout:180388626442 2026-03-09T18:22:49.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:49 vm08 bash[17774]: cluster 2026-03-09T18:22:48.426818+0000 mgr.y (mgr.24335) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:49.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:49 vm08 bash[17774]: audit 2026-03-09T18:22:48.492244+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:49.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:49 vm08 bash[17774]: audit 2026-03-09T18:22:49.209018+0000 mon.c (mon.1) 46 : audit [DBG] from='client.? 
192.168.123.100:0/3753650516' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:22:50.051 INFO:tasks.cephadm.ceph_manager.ceph:need seq 180388626442 got 180388626442 for osd.6 2026-03-09T18:22:50.051 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:50.079 INFO:teuthology.orchestra.run.vm00.stdout:206158430215 2026-03-09T18:22:50.086 INFO:teuthology.orchestra.run.vm00.stdout:154618822671 2026-03-09T18:22:50.148 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822671 got 154618822671 for osd.5 2026-03-09T18:22:50.149 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:50.172 INFO:teuthology.orchestra.run.vm00.stdout:77309411351 2026-03-09T18:22:50.184 INFO:tasks.cephadm.ceph_manager.ceph:need seq 206158430215 got 206158430215 for osd.7 2026-03-09T18:22:50.184 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:50.239 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411351 got 77309411351 for osd.2 2026-03-09T18:22:50.239 DEBUG:teuthology.parallel:result is None 2026-03-09T18:22:50.240 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T18:22:50.240 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph pg dump --format=json 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:49.591032+0000 mon.c (mon.1) 47 : audit [DBG] from='client.? 192.168.123.100:0/2465234686' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:49.633332+0000 mon.b (mon.2) 30 : audit [DBG] from='client.? 
192.168.123.100:0/3229547606' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:49.796049+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.100:0/3900902875' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:49.958259+0000 mon.c (mon.1) 48 : audit [DBG] from='client.? 192.168.123.100:0/2857583031' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:50.067255+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.100:0/3861346803' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:50.083451+0000 mon.a (mon.0) 586 : audit [DBG] from='client.? 192.168.123.100:0/1781614463' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:50 vm00 bash[22468]: audit 2026-03-09T18:22:50.166084+0000 mon.c (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/1573222812' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:49.591032+0000 mon.c (mon.1) 47 : audit [DBG] from='client.? 
192.168.123.100:0/2465234686' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:49.633332+0000 mon.b (mon.2) 30 : audit [DBG] from='client.? 192.168.123.100:0/3229547606' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:49.796049+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.100:0/3900902875' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:49.958259+0000 mon.c (mon.1) 48 : audit [DBG] from='client.? 192.168.123.100:0/2857583031' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:50.067255+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.100:0/3861346803' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:50.083451+0000 mon.a (mon.0) 586 : audit [DBG] from='client.? 192.168.123.100:0/1781614463' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T18:22:50.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:50 vm00 bash[17468]: audit 2026-03-09T18:22:50.166084+0000 mon.c (mon.1) 49 : audit [DBG] from='client.? 
192.168.123.100:0/1573222812' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:49.591032+0000 mon.c (mon.1) 47 : audit [DBG] from='client.? 192.168.123.100:0/2465234686' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:49.633332+0000 mon.b (mon.2) 30 : audit [DBG] from='client.? 192.168.123.100:0/3229547606' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:49.796049+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.100:0/3900902875' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:49.958259+0000 mon.c (mon.1) 48 : audit [DBG] from='client.? 192.168.123.100:0/2857583031' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:50.067255+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.100:0/3861346803' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:50.083451+0000 mon.a (mon.0) 586 : audit [DBG] from='client.? 
192.168.123.100:0/1781614463' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T18:22:50.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:50 vm08 bash[17774]: audit 2026-03-09T18:22:50.166084+0000 mon.c (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/1573222812' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T18:22:51.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:51 vm00 bash[22468]: cluster 2026-03-09T18:22:50.427233+0000 mgr.y (mgr.24335) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:51.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:51 vm00 bash[17468]: cluster 2026-03-09T18:22:50.427233+0000 mgr.y (mgr.24335) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:51.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:51 vm08 bash[17774]: cluster 2026-03-09T18:22:50.427233+0000 mgr.y (mgr.24335) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:52.869 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:53.201 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:53.204 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-09T18:22:53.282 
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":15,"stamp":"2026-03-09T18:22:52.427424+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49864,"kb_used_data":5000,"kb_used_omap":0,"kb_used_meta":44800,"kb_avail":167689528,"statfs":{"total":171765137408,"available":171714076672,"internally_reserved":0,"allocated":5120000,"data_stored":2776555,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45875200},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commi
t_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002073"},"pg_stats":[{"pgid":"1.0","version":"51'87","reported_seq":56,"reported_epoch":51,"state":"active+clean","last_fresh":"2026-03-09T18:22:28.570903+0000","last_change":"2026-03-09T18:22:18.552673+0000","last_active":"2026-03-09T18:22:28.570903+0000","last_peered":"2026-03-09T18:22:28.570903+0000","last_clean":"2026-03-09T18:22:28.570903+0000","last_became_active":"2026-03-09T18:22:18.245326+0000","last_became_peered":"2026-03-09T18:22:18.245326+0000","last_unstale":"2026-03-09T18:22:28.570903+0000","last_undegraded":"2026-03-09T18:22:28.570903+0000","last_fullsized":"2026-03-09T18:22:28.570903+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0
","created":19,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:20:56.568905+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:20:56.568905+0000","last_clean_scrub_stamp":"2026-03-09T18:20:56.568905+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:04:38.360561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objec
ts_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":48,"seq":206158430216,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6188,"kb_used_data":868,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961236,"statfs":{"total":21470642176,"available":21464305664,"internally_reserved":0,"allocated":888832,"data_stored":595553,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.78900000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.85199999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93300000000000005}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.86899999999999999}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83699999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.77000000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.74199999999999999}]}]},{"osd":6,"up_from":42,"seq":180388626443,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6252,"kb_used_data":868,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961172,"statfs":{"total":21470642176,"available":21464240128,"internally_reserved":0,"allocated":888832,"data_stored":595553,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.88800000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.91200000000000003}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.71799999999999997}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.89900000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.024}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.80400000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73599999999999999}]}]},{"osd":1,"up_from":13,"seq":55834574876,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6436,"kb_used_data":476,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20960988,"statfs":{"total":21470642176,"available":21464051712,"internally_reserved":0,"allocated":487424,"data_stored":197784,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.72099999999999997,"5min":0.63700000000000001,"15min":0.623},"min":{"1min":0.28899999999999998,"5min":0.28699999999999998,"15min":0.28699999999999998},"max":{"1min":1.423,"5min":3.335,"15min":3.335},"last":0.255},{"interface":"front","average":{"1min":0.77700000000000002,"5min":0.59499999999999997,"15min":0.56499999999999995},"min":{"1min":0.307,"5min":0.251,"15min":0.251},"max":{"1min":1.5109999999999999,"5min":1.681,"15min":1.681},"last":0.28499999999999998}]},{"osd":2,"last update":"Mon Mar 9 18:21:55 
2026","interfaces":[{"interface":"back","average":{"1min":0.59899999999999998,"5min":0.59899999999999998,"15min":0.59899999999999998},"min":{"1min":0.29699999999999999,"5min":0.29699999999999999,"15min":0.29699999999999999},"max":{"1min":1.7110000000000001,"5min":1.7110000000000001,"15min":1.7110000000000001},"last":0.42999999999999999},{"interface":"front","average":{"1min":0.63200000000000001,"5min":0.63200000000000001,"15min":0.63200000000000001},"min":{"1min":0.40200000000000002,"5min":0.40200000000000002,"15min":0.40200000000000002},"max":{"1min":1.5229999999999999,"5min":1.5229999999999999,"15min":1.5229999999999999},"last":0.46600000000000003}]},{"osd":3,"last update":"Mon Mar 9 18:22:15 2026","interfaces":[{"interface":"back","average":{"1min":0.624,"5min":0.624,"15min":0.624},"min":{"1min":0.41799999999999998,"5min":0.41799999999999998,"15min":0.41799999999999998},"max":{"1min":0.95999999999999996,"5min":0.95999999999999996,"15min":0.95999999999999996},"last":0.55300000000000005},{"interface":"front","average":{"1min":0.63100000000000001,"5min":0.63100000000000001,"15min":0.63100000000000001},"min":{"1min":0.36399999999999999,"5min":0.36399999999999999,"15min":0.36399999999999999},"max":{"1min":1.2250000000000001,"5min":1.2250000000000001,"15min":1.2250000000000001},"last":0.60799999999999998}]},{"osd":4,"last update":"Mon Mar 9 18:22:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.73199999999999998,"5min":0.73199999999999998,"15min":0.73199999999999998},"min":{"1min":0.499,"5min":0.499,"15min":0.499},"max":{"1min":1.2090000000000001,"5min":1.2090000000000001,"15min":1.2090000000000001},"last":0.64600000000000002},{"interface":"front","average":{"1min":0.73199999999999998,"5min":0.73199999999999998,"15min":0.73199999999999998},"min":{"1min":0.42199999999999999,"5min":0.42199999999999999,"15min":0.42199999999999999},"max":{"1min":1.2290000000000001,"5min":1.2290000000000001,"15min":1.2290000000000001},"last":0.57599999999999996}]},{"osd":5,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.88300000000000001,"5min":0.88300000000000001,"15min":0.88300000000000001},"min":{"1min":0.52100000000000002,"5min":0.52100000000000002,"15min":0.52100000000000002},"max":{"1min":1.494,"5min":1.494,"15min":1.494},"last":0.60099999999999998},{"interface":"front","average":{"1min":0.77000000000000002,"5min":0.77000000000000002,"15min":0.77000000000000002},"min":{"1min":0.48899999999999999,"5min":0.48899999999999999,"15min":0.48899999999999999},"max":{"1min":1.3640000000000001,"5min":1.3640000000000001,"15min":1.3640000000000001},"last":0.63500000000000001}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.77500000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68500000000000005}]}]},{"osd":0,"up_from":8,"seq":34359738398,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6896,"kb_used_data":872,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960528,"statfs":{"total":21470642176,"available":21463580672,"internally_reserved":0,"allocated":892928,"data_stored":595868,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Mon Mar 9 18:22:47 2026","interfaces":[{"interface":"back","average":{"1min":0.69599999999999995,"5min":0.68100000000000005,"15min":0.67900000000000005},"min":{"1min":0.29599999999999999,"5min":0.22900000000000001,"15min":0.22900000000000001},"max":{"1min":1.589,"5min":3.9889999999999999,"15min":3.9889999999999999},"last":0.72799999999999998},{"interface":"front","average":{"1min":0.68000000000000005,"5min":0.47599999999999998,"15min":0.442},"min":{"1min":0.41099999999999998,"5min":0.23400000000000001,"15min":0.23400000000000001},"max":{"1min":1.5089999999999999,"5min":1.5089999999999999,"15min":1.5089999999999999},"last":0.35099999999999998}]},{"osd":2,"last update":"Mon Mar 9 18:21:59 
2026","interfaces":[{"interface":"back","average":{"1min":0.52900000000000003,"5min":0.52900000000000003,"15min":0.52900000000000003},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":0.97199999999999998,"5min":0.97199999999999998,"15min":0.97199999999999998},"last":0.45500000000000002},{"interface":"front","average":{"1min":0.55000000000000004,"5min":0.55000000000000004,"15min":0.55000000000000004},"min":{"1min":0.35399999999999998,"5min":0.35399999999999998,"15min":0.35399999999999998},"max":{"1min":0.88700000000000001,"5min":0.88700000000000001,"15min":0.88700000000000001},"last":0.60099999999999998}]},{"osd":3,"last update":"Mon Mar 9 18:22:13 2026","interfaces":[{"interface":"back","average":{"1min":0.57699999999999996,"5min":0.57699999999999996,"15min":0.57699999999999996},"min":{"1min":0.26700000000000002,"5min":0.26700000000000002,"15min":0.26700000000000002},"max":{"1min":1.22,"5min":1.22,"15min":1.22},"last":1.1040000000000001},{"interface":"front","average":{"1min":0.57699999999999996,"5min":0.57699999999999996,"15min":0.57699999999999996},"min":{"1min":0.41199999999999998,"5min":0.41199999999999998,"15min":0.41199999999999998},"max":{"1min":0.83499999999999996,"5min":0.83499999999999996,"15min":0.83499999999999996},"last":1.157}]},{"osd":4,"last update":"Mon Mar 9 18:22:26 
2026","interfaces":[{"interface":"back","average":{"1min":0.85299999999999998,"5min":0.85299999999999998,"15min":0.85299999999999998},"min":{"1min":0.51600000000000001,"5min":0.51600000000000001,"15min":0.51600000000000001},"max":{"1min":2.6000000000000001,"5min":2.6000000000000001,"15min":2.6000000000000001},"last":0.48699999999999999},{"interface":"front","average":{"1min":0.82599999999999996,"5min":0.82599999999999996,"15min":0.82599999999999996},"min":{"1min":0.39100000000000001,"5min":0.39100000000000001,"15min":0.39100000000000001},"max":{"1min":2.5790000000000002,"5min":2.5790000000000002,"15min":2.5790000000000002},"last":0.47399999999999998}]},{"osd":5,"last update":"Mon Mar 9 18:22:47 2026","interfaces":[{"interface":"back","average":{"1min":0.81599999999999995,"5min":0.81599999999999995,"15min":0.81599999999999995},"min":{"1min":0.49399999999999999,"5min":0.49399999999999999,"15min":0.49399999999999999},"max":{"1min":2.6669999999999998,"5min":2.6669999999999998,"15min":2.6669999999999998},"last":0.46500000000000002},{"interface":"front","average":{"1min":0.82299999999999995,"5min":0.82299999999999995,"15min":0.82299999999999995},"min":{"1min":0.48699999999999999,"5min":0.48699999999999999,"15min":0.48699999999999999},"max":{"1min":1.9630000000000001,"5min":1.9630000000000001,"15min":1.9630000000000001},"last":1.149}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52600000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.093}]}]},{"osd":2,"up_from":18,"seq":77309411352,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6376,"kb_used_data":480,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961048,"statfs":{"total":21470642176,"available":21464113152,"internally_reserved":0,"allocated":491520,"data_stored":198028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:00 2026","interfaces":[{"interface":"back","average":{"1min":0.55800000000000005,"5min":0.55800000000000005,"15min":0.55800000000000005},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":1.1359999999999999,"5min":1.1359999999999999,"15min":1.1359999999999999},"last":0.96799999999999997},{"interface":"front","average":{"1min":0.498,"5min":0.498,"15min":0.498},"min":{"1min":0.31,"5min":0.31,"15min":0.31},"max":{"1min":0.74099999999999999,"5min":0.74099999999999999,"15min":0.74099999999999999},"last":0.35699999999999998}]},{"osd":1,"last update":"Mon Mar 9 18:22:00 
2026","interfaces":[{"interface":"back","average":{"1min":0.58199999999999996,"5min":0.58199999999999996,"15min":0.58199999999999996},"min":{"1min":0.318,"5min":0.318,"15min":0.318},"max":{"1min":1.2310000000000001,"5min":1.2310000000000001,"15min":1.2310000000000001},"last":0.89400000000000002},{"interface":"front","average":{"1min":0.56799999999999995,"5min":0.56799999999999995,"15min":0.56799999999999995},"min":{"1min":0.28000000000000003,"5min":0.28000000000000003,"15min":0.28000000000000003},"max":{"1min":1.119,"5min":1.119,"15min":1.119},"last":0.93999999999999995}]},{"osd":3,"last update":"Mon Mar 9 18:22:14 2026","interfaces":[{"interface":"back","average":{"1min":0.59299999999999997,"5min":0.59299999999999997,"15min":0.59299999999999997},"min":{"1min":0.38400000000000001,"5min":0.38400000000000001,"15min":0.38400000000000001},"max":{"1min":1.077,"5min":1.077,"15min":1.077},"last":0.90400000000000003},{"interface":"front","average":{"1min":0.65000000000000002,"5min":0.65000000000000002,"15min":0.65000000000000002},"min":{"1min":0.40600000000000003,"5min":0.40600000000000003,"15min":0.40600000000000003},"max":{"1min":1.1850000000000001,"5min":1.1850000000000001,"15min":1.1850000000000001},"last":0.95699999999999996}]},{"osd":4,"last update":"Mon Mar 9 18:22:29 2026","interfaces":[{"interface":"back","average":{"1min":0.68000000000000005,"5min":0.68000000000000005,"15min":0.68000000000000005},"min":{"1min":0.38100000000000001,"5min":0.38100000000000001,"15min":0.38100000000000001},"max":{"1min":1.145,"5min":1.145,"15min":1.145},"last":0.82299999999999995},{"interface":"front","average":{"1min":0.71599999999999997,"5min":0.71599999999999997,"15min":0.71599999999999997},"min":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"max":{"1min":1.1599999999999999,"5min":1.1599999999999999,"15min":1.1599999999999999},"last":0.94899999999999995}]},{"osd":5,"last update":"Mon Mar 9 18:22:43 
2026","interfaces":[{"interface":"back","average":{"1min":0.73099999999999998,"5min":0.73099999999999998,"15min":0.73099999999999998},"min":{"1min":0.441,"5min":0.441,"15min":0.441},"max":{"1min":1.1759999999999999,"5min":1.1759999999999999,"15min":1.1759999999999999},"last":0.85299999999999998},{"interface":"front","average":{"1min":0.74299999999999999,"5min":0.74299999999999999,"15min":0.74299999999999999},"min":{"1min":0.51400000000000001,"5min":0.51400000000000001,"15min":0.51400000000000001},"max":{"1min":1.1970000000000001,"5min":1.1970000000000001,"15min":1.1970000000000001},"last":0.91200000000000003}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.97699999999999998}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84099999999999997}]}]},{"osd":3,"up_from":25,"seq":107374182422,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5928,"kb_used_data":480,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961496,"statfs":{"total":21470642176,"available":21464571904,"internally_reserved":0,"allocated":491520,"data_stored":198028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:12 
2026","interfaces":[{"interface":"back","average":{"1min":0.56299999999999994,"5min":0.56299999999999994,"15min":0.56299999999999994},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":0.79000000000000004,"5min":0.79000000000000004,"15min":0.79000000000000004},"last":0.47799999999999998},{"interface":"front","average":{"1min":0.55600000000000005,"5min":0.55600000000000005,"15min":0.55600000000000005},"min":{"1min":0.316,"5min":0.316,"15min":0.316},"max":{"1min":0.86299999999999999,"5min":0.86299999999999999,"15min":0.86299999999999999},"last":1.0089999999999999}]},{"osd":1,"last update":"Mon Mar 9 18:22:12 2026","interfaces":[{"interface":"back","average":{"1min":0.59899999999999998,"5min":0.59899999999999998,"15min":0.59899999999999998},"min":{"1min":0.26000000000000001,"5min":0.26000000000000001,"15min":0.26000000000000001},"max":{"1min":1.2090000000000001,"5min":1.2090000000000001,"15min":1.2090000000000001},"last":0.99399999999999999},{"interface":"front","average":{"1min":0.60799999999999998,"5min":0.60799999999999998,"15min":0.60799999999999998},"min":{"1min":0.28100000000000003,"5min":0.28100000000000003,"15min":0.28100000000000003},"max":{"1min":1.095,"5min":1.095,"15min":1.095},"last":0.38400000000000001}]},{"osd":2,"last update":"Mon Mar 9 18:22:12 2026","interfaces":[{"interface":"back","average":{"1min":0.61899999999999999,"5min":0.61899999999999999,"15min":0.61899999999999999},"min":{"1min":0.307,"5min":0.307,"15min":0.307},"max":{"1min":0.90100000000000002,"5min":0.90100000000000002,"15min":0.90100000000000002},"last":0.36499999999999999},{"interface":"front","average":{"1min":0.57599999999999996,"5min":0.57599999999999996,"15min":0.57599999999999996},"min":{"1min":0.28199999999999997,"5min":0.28199999999999997,"15min":0.28199999999999997},"max":{"1min":0.78600000000000003,"5min":0.78600000000000003,"15min":0.78600000000000003},"last":0.81399999999999995}]},{"osd":4,"last update":"Mon Mar 9 18:22:30 
2026","interfaces":[{"interface":"back","average":{"1min":0.77100000000000002,"5min":0.77100000000000002,"15min":0.77100000000000002},"min":{"1min":0.48599999999999999,"5min":0.48599999999999999,"15min":0.48599999999999999},"max":{"1min":2.6269999999999998,"5min":2.6269999999999998,"15min":2.6269999999999998},"last":0.95299999999999996},{"interface":"front","average":{"1min":0.84299999999999997,"5min":0.84299999999999997,"15min":0.84299999999999997},"min":{"1min":0.46999999999999997,"5min":0.46999999999999997,"15min":0.46999999999999997},"max":{"1min":2.6890000000000001,"5min":2.6890000000000001,"15min":2.6890000000000001},"last":0.75900000000000001}]},{"osd":5,"last update":"Mon Mar 9 18:22:47 2026","interfaces":[{"interface":"back","average":{"1min":0.872,"5min":0.872,"15min":0.872},"min":{"1min":0.48699999999999999,"5min":0.48699999999999999,"15min":0.48699999999999999},"max":{"1min":2.8260000000000001,"5min":2.8260000000000001,"15min":2.8260000000000001},"last":0.95099999999999996},{"interface":"front","average":{"1min":0.91900000000000004,"5min":0.91900000000000004,"15min":0.91900000000000004},"min":{"1min":0.55200000000000005,"5min":0.55200000000000005,"15min":0.55200000000000005},"max":{"1min":2.8780000000000001,"5min":2.8780000000000001,"15min":2.8780000000000001},"last":1.046}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.74299999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84099999999999997}]}]},{"osd":4,"up_from":30,"seq":128849018898,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5928,"kb_used_data":480,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961496,"statfs":{"total":21470642176,"available":21464571904,"internally_reserved":0,"allocated":491520,"data_stored":198028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:27 2026","interfaces":[{"interface":"back","average":{"1min":0.67600000000000005,"5min":0.67600000000000005,"15min":0.67600000000000005},"min":{"1min":0.42799999999999999,"5min":0.42799999999999999,"15min":0.42799999999999999},"max":{"1min":0.995,"5min":0.995,"15min":0.995},"last":0.76100000000000001},{"interface":"front","average":{"1min":0.60099999999999998,"5min":0.60099999999999998,"15min":0.60099999999999998},"min":{"1min":0.443,"5min":0.443,"15min":0.443},"max":{"1min":0.95499999999999996,"5min":0.95499999999999996,"15min":0.95499999999999996},"last":0.70499999999999996}]},{"osd":1,"last update":"Mon Mar 9 18:22:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.69699999999999995,"5min":0.69699999999999995,"15min":0.69699999999999995},"min":{"1min":0.48299999999999998,"5min":0.48299999999999998,"15min":0.48299999999999998},"max":{"1min":1.0209999999999999,"5min":1.0209999999999999,"15min":1.0209999999999999},"last":0.73399999999999999},{"interface":"front","average":{"1min":0.73999999999999999,"5min":0.73999999999999999,"15min":0.73999999999999999},"min":{"1min":0.504,"5min":0.504,"15min":0.504},"max":{"1min":1.5409999999999999,"5min":1.5409999999999999,"15min":1.5409999999999999},"last":0.69599999999999995}]},{"osd":2,"last update":"Mon Mar 9 18:22:27 2026","interfaces":[{"interface":"back","average":{"1min":0.77900000000000003,"5min":0.77900000000000003,"15min":0.77900000000000003},"min":{"1min":0.40999999999999998,"5min":0.40999999999999998,"15min":0.40999999999999998},"max":{"1min":1.5289999999999999,"5min":1.5289999999999999,"15min":1.5289999999999999},"last":0.66600000000000004},{"interface":"front","average":{"1min":0.76800000000000002,"5min":0.76800000000000002,"15min":0.76800000000000002},"min":{"1min":0.45000000000000001,"5min":0.45000000000000001,"15min":0.45000000000000001},"max":{"1min":1.899,"5min":1.899,"15min":1.899},"last":0.77600000000000002}]},{"osd":3,"last update":"Mon Mar 9 18:22:27 2026","interfaces":[{"interface":"back","average":{"1min":0.72099999999999997,"5min":0.72099999999999997,"15min":0.72099999999999997},"min":{"1min":0.39900000000000002,"5min":0.39900000000000002,"15min":0.39900000000000002},"max":{"1min":1.242,"5min":1.242,"15min":1.242},"last":0.79600000000000004},{"interface":"front","average":{"1min":0.751,"5min":0.751,"15min":0.751},"min":{"1min":0.44,"5min":0.44,"15min":0.44},"max":{"1min":1.0580000000000001,"5min":1.0580000000000001,"15min":1.0580000000000001},"last":0.748}]},{"osd":5,"last update":"Mon Mar 9 18:22:44 
2026","interfaces":[{"interface":"back","average":{"1min":0.64900000000000002,"5min":0.64900000000000002,"15min":0.64900000000000002},"min":{"1min":0.432,"5min":0.432,"15min":0.432},"max":{"1min":1.0529999999999999,"5min":1.0529999999999999,"15min":1.0529999999999999},"last":0.74099999999999999},{"interface":"front","average":{"1min":0.71799999999999997,"5min":0.71799999999999997,"15min":0.71799999999999997},"min":{"1min":0.38900000000000001,"5min":0.38900000000000001,"15min":0.38900000000000001},"max":{"1min":1.9259999999999999,"5min":1.9259999999999999,"15min":1.9259999999999999},"last":0.80500000000000005}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72099999999999997}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81200000000000006}]}]},{"osd":5,"up_from":36,"seq":154618822672,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5860,"kb_used_data":476,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961564,"statfs":{"total":21470642176,"available":21464641536,"internally_reserved":0,"allocated":487424,"data_stored":197713,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:45 
2026","interfaces":[{"interface":"back","average":{"1min":0.65900000000000003,"5min":0.65900000000000003,"15min":0.65900000000000003},"min":{"1min":0.40000000000000002,"5min":0.40000000000000002,"15min":0.40000000000000002},"max":{"1min":0.94099999999999995,"5min":0.94099999999999995,"15min":0.94099999999999995},"last":0.69499999999999995},{"interface":"front","average":{"1min":0.69399999999999995,"5min":0.69399999999999995,"15min":0.69399999999999995},"min":{"1min":0.41999999999999998,"5min":0.41999999999999998,"15min":0.41999999999999998},"max":{"1min":0.90100000000000002,"5min":0.90100000000000002,"15min":0.90100000000000002},"last":0.54300000000000004}]},{"osd":1,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.70399999999999996,"5min":0.70399999999999996,"15min":0.70399999999999996},"min":{"1min":0.434,"5min":0.434,"15min":0.434},"max":{"1min":1.0109999999999999,"5min":1.0109999999999999,"15min":1.0109999999999999},"last":0.72199999999999998},{"interface":"front","average":{"1min":0.71999999999999997,"5min":0.71999999999999997,"15min":0.71999999999999997},"min":{"1min":0.40999999999999998,"5min":0.40999999999999998,"15min":0.40999999999999998},"max":{"1min":0.96699999999999997,"5min":0.96699999999999997,"15min":0.96699999999999997},"last":0.96699999999999997}]},{"osd":2,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.71899999999999997,"5min":0.71899999999999997,"15min":0.71899999999999997},"min":{"1min":0.436,"5min":0.436,"15min":0.436},"max":{"1min":1.1240000000000001,"5min":1.1240000000000001,"15min":1.1240000000000001},"last":1.1240000000000001},{"interface":"front","average":{"1min":0.73799999999999999,"5min":0.73799999999999999,"15min":0.73799999999999999},"min":{"1min":0.46500000000000002,"5min":0.46500000000000002,"15min":0.46500000000000002},"max":{"1min":1.077,"5min":1.077,"15min":1.077},"last":1.077}]},{"osd":3,"last update":"Mon Mar 9 18:22:45 
2026","interfaces":[{"interface":"back","average":{"1min":0.73599999999999999,"5min":0.73599999999999999,"15min":0.73599999999999999},"min":{"1min":0.50700000000000001,"5min":0.50700000000000001,"15min":0.50700000000000001},"max":{"1min":1.0489999999999999,"5min":1.0489999999999999,"15min":1.0489999999999999},"last":0.94699999999999995},{"interface":"front","average":{"1min":0.746,"5min":0.746,"15min":0.746},"min":{"1min":0.38700000000000001,"5min":0.38700000000000001,"15min":0.38700000000000001},"max":{"1min":1.0229999999999999,"5min":1.0229999999999999,"15min":1.0229999999999999},"last":0.94199999999999995}]},{"osd":4,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.66400000000000003,"5min":0.66400000000000003,"15min":0.66400000000000003},"min":{"1min":0.36799999999999999,"5min":0.36799999999999999,"15min":0.36799999999999999},"max":{"1min":0.95499999999999996,"5min":0.95499999999999996,"15min":0.95499999999999996},"last":0.60699999999999998},{"interface":"front","average":{"1min":0.64400000000000002,"5min":0.64400000000000002,"15min":0.64400000000000002},"min":{"1min":0.39600000000000002,"5min":0.39600000000000002,"15min":0.39600000000000002},"max":{"1min":1.0269999999999999,"5min":1.0269999999999999,"15min":1.0269999999999999},"last":0.92800000000000005}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81000000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52300000000000002}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T18:22:53.282 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph pg dump --format=json 2026-03-09T18:22:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:53 vm00 bash[17468]: cluster 2026-03-09T18:22:52.427581+0000 mgr.y (mgr.24335) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:53 vm00 bash[22468]: cluster 2026-03-09T18:22:52.427581+0000 mgr.y (mgr.24335) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:53 vm08 bash[17774]: cluster 2026-03-09T18:22:52.427581+0000 mgr.y (mgr.24335) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:54.534 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:22:54 vm00 bash[38226]: level=info 
ts=2026-03-09T18:22:54.273Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.003162227s 2026-03-09T18:22:54.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:54 vm00 bash[22468]: audit 2026-03-09T18:22:53.198499+0000 mgr.y (mgr.24335) 33 : audit [DBG] from='client.24425 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:22:54.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:54 vm00 bash[17468]: audit 2026-03-09T18:22:53.198499+0000 mgr.y (mgr.24335) 33 : audit [DBG] from='client.24425 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:22:54.976 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:54 vm08 bash[17774]: audit 2026-03-09T18:22:53.198499+0000 mgr.y (mgr.24335) 33 : audit [DBG] from='client.24425 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:22:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:55 vm00 bash[17468]: cluster 2026-03-09T18:22:54.428075+0000 mgr.y (mgr.24335) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:55.885 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:22:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:22:55] "GET /metrics HTTP/1.1" 200 191100 "" "Prometheus/2.33.4" 2026-03-09T18:22:55.885 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:55 vm00 bash[22468]: cluster 2026-03-09T18:22:54.428075+0000 mgr.y (mgr.24335) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:55.908 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:55 vm08 bash[17774]: cluster 
2026-03-09T18:22:54.428075+0000 mgr.y (mgr.24335) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:56.275 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:56.278 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-09T18:22:56.333 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":16,"stamp":"2026-03-09T18:22:54.427769+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49864,"kb_used_data":5000,"kb_used_omap":0,"kb_used_meta":44800,"kb_avail":167689528,"statfs":{"total":171765137408,"available":171714076672,"internally_reserved":0,"all
ocated":5120000,"data_stored":2776555,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45875200},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002037"},"pg_stats":[{"pgid":"1.0","version":"51'87","reported_seq":56,"reported_epoch":51,"state":"active+clean","last_fresh":"2026-03-09T18:22:28.570903+0000","last_change":"2026-03-09T18:22:18.552673+0000","last_active":"2026-03-09T18:22:28.570903+0000","last_peered":"2026-03-09T18:22:28.570903+0000","last_clean":"2026-03-09T18:22:28.570903+0000",
"last_became_active":"2026-03-09T18:22:18.245326+0000","last_became_peered":"2026-03-09T18:22:18.245326+0000","last_unstale":"2026-03-09T18:22:28.570903+0000","last_undegraded":"2026-03-09T18:22:28.570903+0000","last_fullsized":"2026-03-09T18:22:28.570903+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T18:20:56.568905+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T18:20:56.568905+0000","last_clean_scrub_stamp":"2026-03-09T18:20:56.568905+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:04:38.360561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[
],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":48,"seq":206158430216,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6188,"kb_used_data":868,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961236,"statfs":{"total":21470642176,"available":21464305664,"internally_reserved":0,"allocated":888832,"data_stored":595553,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency
_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.78900000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.85199999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93300000000000005}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.86899999999999999}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83699999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.77000000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.74199999999999999}]}]},{"osd":6,"up_from":42,"seq":180388626443,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6252,"kb_used_data":868,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961172,"statfs":{"total":21470642176,"available":21464240128,"internally_reserved":0,"allocated":888832,"data_stored":595553,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.88800000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.91200000000000003}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.71799999999999997}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.89900000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.024}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.80400000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73599999999999999}]}]},{"osd":1,"up_from":13,"seq":55834574876,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6436,"kb_used_data":476,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20960988,"statfs":{"total":21470642176,"available":21464051712,"internally_reserved":0,"allocated":487424,"data_stored":197784,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.72099999999999997,"5min":0.63700000000000001,"15min":0.623},"min":{"1min":0.28899999999999998,"5min":0.28699999999999998,"15min":0.28699999999999998},"max":{"1min":1.423,"5min":3.335,"15min":3.335},"last":0.255},{"interface":"front","average":{"1min":0.77700000000000002,"5min":0.59499999999999997,"15min":0.56499999999999995},"min":{"1min":0.307,"5min":0.251,"15min":0.251},"max":{"1min":1.5109999999999999,"5min":1.681,"15min":1.681},"last":0.28499999999999998}]},{"osd":2,"last update":"Mon Mar 9 18:21:55 
2026","interfaces":[{"interface":"back","average":{"1min":0.59899999999999998,"5min":0.59899999999999998,"15min":0.59899999999999998},"min":{"1min":0.29699999999999999,"5min":0.29699999999999999,"15min":0.29699999999999999},"max":{"1min":1.7110000000000001,"5min":1.7110000000000001,"15min":1.7110000000000001},"last":0.42999999999999999},{"interface":"front","average":{"1min":0.63200000000000001,"5min":0.63200000000000001,"15min":0.63200000000000001},"min":{"1min":0.40200000000000002,"5min":0.40200000000000002,"15min":0.40200000000000002},"max":{"1min":1.5229999999999999,"5min":1.5229999999999999,"15min":1.5229999999999999},"last":0.46600000000000003}]},{"osd":3,"last update":"Mon Mar 9 18:22:15 2026","interfaces":[{"interface":"back","average":{"1min":0.624,"5min":0.624,"15min":0.624},"min":{"1min":0.41799999999999998,"5min":0.41799999999999998,"15min":0.41799999999999998},"max":{"1min":0.95999999999999996,"5min":0.95999999999999996,"15min":0.95999999999999996},"last":0.55300000000000005},{"interface":"front","average":{"1min":0.63100000000000001,"5min":0.63100000000000001,"15min":0.63100000000000001},"min":{"1min":0.36399999999999999,"5min":0.36399999999999999,"15min":0.36399999999999999},"max":{"1min":1.2250000000000001,"5min":1.2250000000000001,"15min":1.2250000000000001},"last":0.60799999999999998}]},{"osd":4,"last update":"Mon Mar 9 18:22:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.73199999999999998,"5min":0.73199999999999998,"15min":0.73199999999999998},"min":{"1min":0.499,"5min":0.499,"15min":0.499},"max":{"1min":1.2090000000000001,"5min":1.2090000000000001,"15min":1.2090000000000001},"last":0.64600000000000002},{"interface":"front","average":{"1min":0.73199999999999998,"5min":0.73199999999999998,"15min":0.73199999999999998},"min":{"1min":0.42199999999999999,"5min":0.42199999999999999,"15min":0.42199999999999999},"max":{"1min":1.2290000000000001,"5min":1.2290000000000001,"15min":1.2290000000000001},"last":0.57599999999999996}]},{"osd":5,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.88300000000000001,"5min":0.88300000000000001,"15min":0.88300000000000001},"min":{"1min":0.52100000000000002,"5min":0.52100000000000002,"15min":0.52100000000000002},"max":{"1min":1.494,"5min":1.494,"15min":1.494},"last":0.60099999999999998},{"interface":"front","average":{"1min":0.77000000000000002,"5min":0.77000000000000002,"15min":0.77000000000000002},"min":{"1min":0.48899999999999999,"5min":0.48899999999999999,"15min":0.48899999999999999},"max":{"1min":1.3640000000000001,"5min":1.3640000000000001,"15min":1.3640000000000001},"last":0.63500000000000001}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.77500000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68500000000000005}]}]},{"osd":0,"up_from":8,"seq":34359738398,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6896,"kb_used_data":872,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960528,"statfs":{"total":21470642176,"available":21463580672,"internally_reserved":0,"allocated":892928,"data_stored":595868,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Mon Mar 9 18:22:47 2026","interfaces":[{"interface":"back","average":{"1min":0.69599999999999995,"5min":0.68100000000000005,"15min":0.67900000000000005},"min":{"1min":0.29599999999999999,"5min":0.22900000000000001,"15min":0.22900000000000001},"max":{"1min":1.589,"5min":3.9889999999999999,"15min":3.9889999999999999},"last":0.72799999999999998},{"interface":"front","average":{"1min":0.68000000000000005,"5min":0.47599999999999998,"15min":0.442},"min":{"1min":0.41099999999999998,"5min":0.23400000000000001,"15min":0.23400000000000001},"max":{"1min":1.5089999999999999,"5min":1.5089999999999999,"15min":1.5089999999999999},"last":0.35099999999999998}]},{"osd":2,"last update":"Mon Mar 9 18:21:59 
2026","interfaces":[{"interface":"back","average":{"1min":0.52900000000000003,"5min":0.52900000000000003,"15min":0.52900000000000003},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":0.97199999999999998,"5min":0.97199999999999998,"15min":0.97199999999999998},"last":0.45500000000000002},{"interface":"front","average":{"1min":0.55000000000000004,"5min":0.55000000000000004,"15min":0.55000000000000004},"min":{"1min":0.35399999999999998,"5min":0.35399999999999998,"15min":0.35399999999999998},"max":{"1min":0.88700000000000001,"5min":0.88700000000000001,"15min":0.88700000000000001},"last":0.60099999999999998}]},{"osd":3,"last update":"Mon Mar 9 18:22:13 2026","interfaces":[{"interface":"back","average":{"1min":0.57699999999999996,"5min":0.57699999999999996,"15min":0.57699999999999996},"min":{"1min":0.26700000000000002,"5min":0.26700000000000002,"15min":0.26700000000000002},"max":{"1min":1.22,"5min":1.22,"15min":1.22},"last":1.1040000000000001},{"interface":"front","average":{"1min":0.57699999999999996,"5min":0.57699999999999996,"15min":0.57699999999999996},"min":{"1min":0.41199999999999998,"5min":0.41199999999999998,"15min":0.41199999999999998},"max":{"1min":0.83499999999999996,"5min":0.83499999999999996,"15min":0.83499999999999996},"last":1.157}]},{"osd":4,"last update":"Mon Mar 9 18:22:26 
2026","interfaces":[{"interface":"back","average":{"1min":0.85299999999999998,"5min":0.85299999999999998,"15min":0.85299999999999998},"min":{"1min":0.51600000000000001,"5min":0.51600000000000001,"15min":0.51600000000000001},"max":{"1min":2.6000000000000001,"5min":2.6000000000000001,"15min":2.6000000000000001},"last":0.48699999999999999},{"interface":"front","average":{"1min":0.82599999999999996,"5min":0.82599999999999996,"15min":0.82599999999999996},"min":{"1min":0.39100000000000001,"5min":0.39100000000000001,"15min":0.39100000000000001},"max":{"1min":2.5790000000000002,"5min":2.5790000000000002,"15min":2.5790000000000002},"last":0.47399999999999998}]},{"osd":5,"last update":"Mon Mar 9 18:22:47 2026","interfaces":[{"interface":"back","average":{"1min":0.81599999999999995,"5min":0.81599999999999995,"15min":0.81599999999999995},"min":{"1min":0.49399999999999999,"5min":0.49399999999999999,"15min":0.49399999999999999},"max":{"1min":2.6669999999999998,"5min":2.6669999999999998,"15min":2.6669999999999998},"last":0.46500000000000002},{"interface":"front","average":{"1min":0.82299999999999995,"5min":0.82299999999999995,"15min":0.82299999999999995},"min":{"1min":0.48699999999999999,"5min":0.48699999999999999,"15min":0.48699999999999999},"max":{"1min":1.9630000000000001,"5min":1.9630000000000001,"15min":1.9630000000000001},"last":1.149}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52600000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.093}]}]},{"osd":2,"up_from":18,"seq":77309411352,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6376,"kb_used_data":480,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961048,"statfs":{"total":21470642176,"available":21464113152,"internally_reserved":0,"allocated":491520,"data_stored":198028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:00 2026","interfaces":[{"interface":"back","average":{"1min":0.55800000000000005,"5min":0.55800000000000005,"15min":0.55800000000000005},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":1.1359999999999999,"5min":1.1359999999999999,"15min":1.1359999999999999},"last":0.96799999999999997},{"interface":"front","average":{"1min":0.498,"5min":0.498,"15min":0.498},"min":{"1min":0.31,"5min":0.31,"15min":0.31},"max":{"1min":0.74099999999999999,"5min":0.74099999999999999,"15min":0.74099999999999999},"last":0.35699999999999998}]},{"osd":1,"last update":"Mon Mar 9 18:22:00 
2026","interfaces":[{"interface":"back","average":{"1min":0.58199999999999996,"5min":0.58199999999999996,"15min":0.58199999999999996},"min":{"1min":0.318,"5min":0.318,"15min":0.318},"max":{"1min":1.2310000000000001,"5min":1.2310000000000001,"15min":1.2310000000000001},"last":0.89400000000000002},{"interface":"front","average":{"1min":0.56799999999999995,"5min":0.56799999999999995,"15min":0.56799999999999995},"min":{"1min":0.28000000000000003,"5min":0.28000000000000003,"15min":0.28000000000000003},"max":{"1min":1.119,"5min":1.119,"15min":1.119},"last":0.93999999999999995}]},{"osd":3,"last update":"Mon Mar 9 18:22:14 2026","interfaces":[{"interface":"back","average":{"1min":0.59299999999999997,"5min":0.59299999999999997,"15min":0.59299999999999997},"min":{"1min":0.38400000000000001,"5min":0.38400000000000001,"15min":0.38400000000000001},"max":{"1min":1.077,"5min":1.077,"15min":1.077},"last":0.90400000000000003},{"interface":"front","average":{"1min":0.65000000000000002,"5min":0.65000000000000002,"15min":0.65000000000000002},"min":{"1min":0.40600000000000003,"5min":0.40600000000000003,"15min":0.40600000000000003},"max":{"1min":1.1850000000000001,"5min":1.1850000000000001,"15min":1.1850000000000001},"last":0.95699999999999996}]},{"osd":4,"last update":"Mon Mar 9 18:22:29 2026","interfaces":[{"interface":"back","average":{"1min":0.68000000000000005,"5min":0.68000000000000005,"15min":0.68000000000000005},"min":{"1min":0.38100000000000001,"5min":0.38100000000000001,"15min":0.38100000000000001},"max":{"1min":1.145,"5min":1.145,"15min":1.145},"last":0.82299999999999995},{"interface":"front","average":{"1min":0.71599999999999997,"5min":0.71599999999999997,"15min":0.71599999999999997},"min":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"max":{"1min":1.1599999999999999,"5min":1.1599999999999999,"15min":1.1599999999999999},"last":0.94899999999999995}]},{"osd":5,"last update":"Mon Mar 9 18:22:43 
2026","interfaces":[{"interface":"back","average":{"1min":0.73099999999999998,"5min":0.73099999999999998,"15min":0.73099999999999998},"min":{"1min":0.441,"5min":0.441,"15min":0.441},"max":{"1min":1.1759999999999999,"5min":1.1759999999999999,"15min":1.1759999999999999},"last":0.85299999999999998},{"interface":"front","average":{"1min":0.74299999999999999,"5min":0.74299999999999999,"15min":0.74299999999999999},"min":{"1min":0.51400000000000001,"5min":0.51400000000000001,"15min":0.51400000000000001},"max":{"1min":1.1970000000000001,"5min":1.1970000000000001,"15min":1.1970000000000001},"last":0.91200000000000003}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.97699999999999998}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84099999999999997}]}]},{"osd":3,"up_from":25,"seq":107374182423,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5928,"kb_used_data":480,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961496,"statfs":{"total":21470642176,"available":21464571904,"internally_reserved":0,"allocated":491520,"data_stored":198028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:12 
2026","interfaces":[{"interface":"back","average":{"1min":0.56299999999999994,"5min":0.56299999999999994,"15min":0.56299999999999994},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":0.79000000000000004,"5min":0.79000000000000004,"15min":0.79000000000000004},"last":0.53000000000000003},{"interface":"front","average":{"1min":0.55600000000000005,"5min":0.55600000000000005,"15min":0.55600000000000005},"min":{"1min":0.316,"5min":0.316,"15min":0.316},"max":{"1min":0.86299999999999999,"5min":0.86299999999999999,"15min":0.86299999999999999},"last":0.60099999999999998}]},{"osd":1,"last update":"Mon Mar 9 18:22:12 2026","interfaces":[{"interface":"back","average":{"1min":0.59899999999999998,"5min":0.59899999999999998,"15min":0.59899999999999998},"min":{"1min":0.26000000000000001,"5min":0.26000000000000001,"15min":0.26000000000000001},"max":{"1min":1.2090000000000001,"5min":1.2090000000000001,"15min":1.2090000000000001},"last":0.64200000000000002},{"interface":"front","average":{"1min":0.60799999999999998,"5min":0.60799999999999998,"15min":0.60799999999999998},"min":{"1min":0.28100000000000003,"5min":0.28100000000000003,"15min":0.28100000000000003},"max":{"1min":1.095,"5min":1.095,"15min":1.095},"last":0.54700000000000004}]},{"osd":2,"last update":"Mon Mar 9 18:22:12 2026","interfaces":[{"interface":"back","average":{"1min":0.61899999999999999,"5min":0.61899999999999999,"15min":0.61899999999999999},"min":{"1min":0.307,"5min":0.307,"15min":0.307},"max":{"1min":0.90100000000000002,"5min":0.90100000000000002,"15min":0.90100000000000002},"last":0.80400000000000005},{"interface":"front","average":{"1min":0.57599999999999996,"5min":0.57599999999999996,"15min":0.57599999999999996},"min":{"1min":0.28199999999999997,"5min":0.28199999999999997,"15min":0.28199999999999997},"max":{"1min":0.78600000000000003,"5min":0.78600000000000003,"15min":0.78600000000000003},"last":0.81799999999999995}]},{"osd":4,"last update":"Mon Mar 9 
18:22:30 2026","interfaces":[{"interface":"back","average":{"1min":0.77100000000000002,"5min":0.77100000000000002,"15min":0.77100000000000002},"min":{"1min":0.48599999999999999,"5min":0.48599999999999999,"15min":0.48599999999999999},"max":{"1min":2.6269999999999998,"5min":2.6269999999999998,"15min":2.6269999999999998},"last":0.71299999999999997},{"interface":"front","average":{"1min":0.84299999999999997,"5min":0.84299999999999997,"15min":0.84299999999999997},"min":{"1min":0.46999999999999997,"5min":0.46999999999999997,"15min":0.46999999999999997},"max":{"1min":2.6890000000000001,"5min":2.6890000000000001,"15min":2.6890000000000001},"last":0.56000000000000005}]},{"osd":5,"last update":"Mon Mar 9 18:22:47 2026","interfaces":[{"interface":"back","average":{"1min":0.872,"5min":0.872,"15min":0.872},"min":{"1min":0.48699999999999999,"5min":0.48699999999999999,"15min":0.48699999999999999},"max":{"1min":2.8260000000000001,"5min":2.8260000000000001,"15min":2.8260000000000001},"last":0.57099999999999995},{"interface":"front","average":{"1min":0.91900000000000004,"5min":0.91900000000000004,"15min":0.91900000000000004},"min":{"1min":0.55200000000000005,"5min":0.55200000000000005,"15min":0.55200000000000005},"max":{"1min":2.8780000000000001,"5min":2.8780000000000001,"15min":2.8780000000000001},"last":0.88700000000000001}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65900000000000003}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83399999999999996}]}]},{"osd":4,"up_from":30,"seq":128849018898,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5928,"kb_used_data":480,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961496,"statfs":{"total":21470642176,"available":21464571904,"internally_reserved":0,"allocated":491520,"data_stored":198028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:27 2026","interfaces":[{"interface":"back","average":{"1min":0.67600000000000005,"5min":0.67600000000000005,"15min":0.67600000000000005},"min":{"1min":0.42799999999999999,"5min":0.42799999999999999,"15min":0.42799999999999999},"max":{"1min":0.995,"5min":0.995,"15min":0.995},"last":0.76100000000000001},{"interface":"front","average":{"1min":0.60099999999999998,"5min":0.60099999999999998,"15min":0.60099999999999998},"min":{"1min":0.443,"5min":0.443,"15min":0.443},"max":{"1min":0.95499999999999996,"5min":0.95499999999999996,"15min":0.95499999999999996},"last":0.70499999999999996}]},{"osd":1,"last update":"Mon Mar 9 18:22:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.69699999999999995,"5min":0.69699999999999995,"15min":0.69699999999999995},"min":{"1min":0.48299999999999998,"5min":0.48299999999999998,"15min":0.48299999999999998},"max":{"1min":1.0209999999999999,"5min":1.0209999999999999,"15min":1.0209999999999999},"last":0.73399999999999999},{"interface":"front","average":{"1min":0.73999999999999999,"5min":0.73999999999999999,"15min":0.73999999999999999},"min":{"1min":0.504,"5min":0.504,"15min":0.504},"max":{"1min":1.5409999999999999,"5min":1.5409999999999999,"15min":1.5409999999999999},"last":0.69599999999999995}]},{"osd":2,"last update":"Mon Mar 9 18:22:27 2026","interfaces":[{"interface":"back","average":{"1min":0.77900000000000003,"5min":0.77900000000000003,"15min":0.77900000000000003},"min":{"1min":0.40999999999999998,"5min":0.40999999999999998,"15min":0.40999999999999998},"max":{"1min":1.5289999999999999,"5min":1.5289999999999999,"15min":1.5289999999999999},"last":0.66600000000000004},{"interface":"front","average":{"1min":0.76800000000000002,"5min":0.76800000000000002,"15min":0.76800000000000002},"min":{"1min":0.45000000000000001,"5min":0.45000000000000001,"15min":0.45000000000000001},"max":{"1min":1.899,"5min":1.899,"15min":1.899},"last":0.77600000000000002}]},{"osd":3,"last update":"Mon Mar 9 18:22:27 2026","interfaces":[{"interface":"back","average":{"1min":0.72099999999999997,"5min":0.72099999999999997,"15min":0.72099999999999997},"min":{"1min":0.39900000000000002,"5min":0.39900000000000002,"15min":0.39900000000000002},"max":{"1min":1.242,"5min":1.242,"15min":1.242},"last":0.79600000000000004},{"interface":"front","average":{"1min":0.751,"5min":0.751,"15min":0.751},"min":{"1min":0.44,"5min":0.44,"15min":0.44},"max":{"1min":1.0580000000000001,"5min":1.0580000000000001,"15min":1.0580000000000001},"last":0.748}]},{"osd":5,"last update":"Mon Mar 9 18:22:44 
2026","interfaces":[{"interface":"back","average":{"1min":0.64900000000000002,"5min":0.64900000000000002,"15min":0.64900000000000002},"min":{"1min":0.432,"5min":0.432,"15min":0.432},"max":{"1min":1.0529999999999999,"5min":1.0529999999999999,"15min":1.0529999999999999},"last":0.74099999999999999},{"interface":"front","average":{"1min":0.71799999999999997,"5min":0.71799999999999997,"15min":0.71799999999999997},"min":{"1min":0.38900000000000001,"5min":0.38900000000000001,"15min":0.38900000000000001},"max":{"1min":1.9259999999999999,"5min":1.9259999999999999,"15min":1.9259999999999999},"last":0.80500000000000005}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72099999999999997}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81200000000000006}]}]},{"osd":5,"up_from":36,"seq":154618822672,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5860,"kb_used_data":476,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961564,"statfs":{"total":21470642176,"available":21464641536,"internally_reserved":0,"allocated":487424,"data_stored":197713,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 18:22:45 
2026","interfaces":[{"interface":"back","average":{"1min":0.65900000000000003,"5min":0.65900000000000003,"15min":0.65900000000000003},"min":{"1min":0.40000000000000002,"5min":0.40000000000000002,"15min":0.40000000000000002},"max":{"1min":0.94099999999999995,"5min":0.94099999999999995,"15min":0.94099999999999995},"last":0.69499999999999995},{"interface":"front","average":{"1min":0.69399999999999995,"5min":0.69399999999999995,"15min":0.69399999999999995},"min":{"1min":0.41999999999999998,"5min":0.41999999999999998,"15min":0.41999999999999998},"max":{"1min":0.90100000000000002,"5min":0.90100000000000002,"15min":0.90100000000000002},"last":0.54300000000000004}]},{"osd":1,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.70399999999999996,"5min":0.70399999999999996,"15min":0.70399999999999996},"min":{"1min":0.434,"5min":0.434,"15min":0.434},"max":{"1min":1.0109999999999999,"5min":1.0109999999999999,"15min":1.0109999999999999},"last":0.72199999999999998},{"interface":"front","average":{"1min":0.71999999999999997,"5min":0.71999999999999997,"15min":0.71999999999999997},"min":{"1min":0.40999999999999998,"5min":0.40999999999999998,"15min":0.40999999999999998},"max":{"1min":0.96699999999999997,"5min":0.96699999999999997,"15min":0.96699999999999997},"last":0.96699999999999997}]},{"osd":2,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.71899999999999997,"5min":0.71899999999999997,"15min":0.71899999999999997},"min":{"1min":0.436,"5min":0.436,"15min":0.436},"max":{"1min":1.1240000000000001,"5min":1.1240000000000001,"15min":1.1240000000000001},"last":1.1240000000000001},{"interface":"front","average":{"1min":0.73799999999999999,"5min":0.73799999999999999,"15min":0.73799999999999999},"min":{"1min":0.46500000000000002,"5min":0.46500000000000002,"15min":0.46500000000000002},"max":{"1min":1.077,"5min":1.077,"15min":1.077},"last":1.077}]},{"osd":3,"last update":"Mon Mar 9 18:22:45 
2026","interfaces":[{"interface":"back","average":{"1min":0.73599999999999999,"5min":0.73599999999999999,"15min":0.73599999999999999},"min":{"1min":0.50700000000000001,"5min":0.50700000000000001,"15min":0.50700000000000001},"max":{"1min":1.0489999999999999,"5min":1.0489999999999999,"15min":1.0489999999999999},"last":0.94699999999999995},{"interface":"front","average":{"1min":0.746,"5min":0.746,"15min":0.746},"min":{"1min":0.38700000000000001,"5min":0.38700000000000001,"15min":0.38700000000000001},"max":{"1min":1.0229999999999999,"5min":1.0229999999999999,"15min":1.0229999999999999},"last":0.94199999999999995}]},{"osd":4,"last update":"Mon Mar 9 18:22:45 2026","interfaces":[{"interface":"back","average":{"1min":0.66400000000000003,"5min":0.66400000000000003,"15min":0.66400000000000003},"min":{"1min":0.36799999999999999,"5min":0.36799999999999999,"15min":0.36799999999999999},"max":{"1min":0.95499999999999996,"5min":0.95499999999999996,"15min":0.95499999999999996},"last":0.60699999999999998},{"interface":"front","average":{"1min":0.64400000000000002,"5min":0.64400000000000002,"15min":0.64400000000000002},"min":{"1min":0.39600000000000002,"5min":0.39600000000000002,"15min":0.39600000000000002},"max":{"1min":1.0269999999999999,"5min":1.0269999999999999,"15min":1.0269999999999999},"last":0.92800000000000005}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81000000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52300000000000002}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T18:22:56.333 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T18:22:56.334 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T18:22:56.334 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T18:22:56.334 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph health --format=json 2026-03-09T18:22:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:57 vm00 bash[22468]: audit 2026-03-09T18:22:56.271392+0000 mgr.y (mgr.24335) 35 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:22:57.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:57 vm00 bash[22468]: cluster 2026-03-09T18:22:56.428455+0000 mgr.y (mgr.24335) 36 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:57.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:57 vm00 bash[17468]: audit 2026-03-09T18:22:56.271392+0000 mgr.y (mgr.24335) 35 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:22:57.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:57 vm00 bash[17468]: cluster 2026-03-09T18:22:56.428455+0000 mgr.y (mgr.24335) 36 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:57 vm08 bash[17774]: audit 2026-03-09T18:22:56.271392+0000 mgr.y (mgr.24335) 35 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:22:57.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:57 vm08 bash[17774]: cluster 2026-03-09T18:22:56.428455+0000 mgr.y (mgr.24335) 36 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:58.326 
INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:58 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:22:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:22:58.927 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.927 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.927 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:58.928 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:58.928 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:22:58.928 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:22:58.928 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:58 vm08 systemd[1]: Started Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:22:58.953 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/mon.c/config 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." 
logger=settings 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Path Logs" logger=settings 
path=/var/log/grafana 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="App mode production" logger=settings 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Starting DB migrations" logger=migrator 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create migration_log table" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create user table" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.login" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: 
t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.email" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_login - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_email - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table user to user_v1 - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create user table v2" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_login - v2" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_email - v2" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table user_v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator 
id="Add column help_flags1 to user table" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update user table charset" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add last_seen_at column to user" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add missing user data" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_disabled column to user" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index user.login/user.email" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_service_account column to user" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create temp user table v1-7" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v1-7" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v1-7" 
2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v1-7" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v1-7" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update temp_user table charset" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_email - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_org_id - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_code - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_status - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create temp_user v2" 
2026-03-09T18:22:59.179 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v2" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v2" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v2" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v2" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy temp_user v1 to v2" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop temp_user_tmp_qwerty" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create star table" 2026-03-09T18:22:59.180 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index star.user_id_dashboard_id" 
2026-03-09T18:22:59.395 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:22:59.395 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create org table v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_name - v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create org_user table v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_org_id - v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_user_id - v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update org table charset" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update org_user table charset" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 
bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate all Read Only Viewers to Viewers" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard table" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard.account_id" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_account_id_slug" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_tag table" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard v2" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: 
t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_org_id - v2" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard v1 to v2" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard.data to mediumtext v1" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column updated_by in dashboard - v2" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column created_by in dashboard - v2" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column gnetId in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for gnetId in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 
lvl=info msg="Executing migration" logger=migrator id="Add column plugin_id in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for plugin_id in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_id in dashboard_tag" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard table charset" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_tag table charset" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column folder_id in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column isFolder in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column has_acl in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" 
logger=migrator id="Update uid column values in dashboard" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index dashboard_org_id_uid" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_slug" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard title length" 2026-03-09T18:22:59.429 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: 
t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard_provisioning v1 to v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add check_sum column" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_title" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="delete tags for deleted dashboards" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="delete stars for deleted dashboards" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_is_folder" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: 
t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index data_source.account_id" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index data_source.account_id_name" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_data_source_account_id - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table data_source to data_source_v1 - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_data_source_org_id - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_data_source_org_id_name - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 
vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table data_source_v1 #2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column with_credentials" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add secure json data column" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update data_source table charset" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update initial version to 1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add read_only data column" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate logging ds to loki ds" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update json_data with nulls" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add uid column" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" 
logger=migrator id="Update uid value" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index datasource_org_id_uid" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index datasource_org_id_is_default" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.key" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id_name" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_api_key_account_id - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_key - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_account_id_name - v1" 
2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table api_key to api_key_v1 - v1" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_api_key_org_id - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_key - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_org_id_name - v2" 2026-03-09T18:22:59.430 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy api_key v1 to v2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table api_key_v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update api_key table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add expires to api_key table" 2026-03-09T18:22:59.431 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add service account foreign key" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v4" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_snapshot_v4 #1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v5 #2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_snapshot table 
charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add encrypted dashboard json column" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create quota table v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update quota table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create plugin_setting table" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column 
plugin_version to plugin_settings" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update plugin_setting table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create session table" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist table" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist_item table" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist_item table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2" 2026-03-09T18:22:59.431 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update preferences table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column team_id in preferences" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update team_id column values in preferences" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column week_start in preferences" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create alert table v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert org_id & id " 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert state" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 
bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert dashboard_id" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy alert_rule_tag v1 to v2" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop table alert_rule_tag_v1" 2026-03-09T18:22:59.431 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification table v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column is_default" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column frequency" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column send_reminder" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column disable_resolve_message" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification org_id & name" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert_notification table charset" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create notification_journal table v1" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 
vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_notification_journal" 2026-03-09T18:22:59.431 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification_state table v1" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add for to alert table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in alert_notification" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in alert_notification" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_notification_org_id_uid" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_name" 2026-03-09T18:22:59.432 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column secure_settings in alert_notification" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert.settings to mediumtext" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old annotation table v4" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create annotation table v5" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 0 v3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 1 v3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 2 v3" 2026-03-09T18:22:59.432 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 3 v3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 4 v3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update annotation table charset" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column region_id to annotation table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Drop category_id index" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column tags to annotation table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v2" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T18:22:59.432 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy annotation_tag v2 to v3" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop table annotation_tag_v2" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert annotations and set TEXT to empty" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add created time to annotation table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add updated time to annotation table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" 
logger=migrator id="Add index for created in annotation table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for updated in annotation table" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Convert existing annotations from seconds to milliseconds" 2026-03-09T18:22:59.432 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add epoch_end column" 2026-03-09T18:22:59.460 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T18:22:59.460 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T18:22:59.460 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T18:22:59.462 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm00.local 2026-03-09T18:22:59.462 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin realm create --rgw-realm=r --default' 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for epoch_end" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Make epoch_end the same as epoch" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing 
migration" logger=migrator id="Move region to single row" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch from annotation table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for alert_id on annotation table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create test_data table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_version table v1" 2026-03-09T18:22:59.680 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_version.dashboard_id" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Set dashboard version to 1 where 0" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="save existing dashboard data in dashboard_version table v1" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_version.data to mediumtext v1" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create team table" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index team.org_id" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_org_id_name" 2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create team member table" 
2026-03-09T18:22:59.680 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.org_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.team_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column email to team table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external to team_member table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column permission to team_member table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard acl table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_dashboard_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_user_id" 
2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_user_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_team_id" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_org_id_role" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_permission" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="save default acl rules in dashboard_acl table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="delete acl rules for deleted dashboards and folders" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create tag table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index tag.key_value" 
2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create login attempt table" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index login_attempt.username" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_login_attempt_username - v1" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create login_attempt v2" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_login_attempt_username - v2" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="copy login_attempt v1 to v2" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop login_attempt_tmp_qwerty" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth table" 2026-03-09T18:22:59.681 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter user_auth.auth_id to length 190" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth access token to user_auth" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth refresh token to user_auth" 2026-03-09T18:22:59.681 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth token type to user_auth" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth expiry to user_auth" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add index to user_id column in user_auth" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create server_lock table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index server_lock.operation_uid" 2026-03-09T18:22:59.682 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth token table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.auth_token" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.prev_auth_token" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_auth_token.user_id" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add revoked_at to the user auth token" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create cache_data table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index cache_data.cache_key" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create short_url table v1" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index short_url.org_id-uid" 2026-03-09T18:22:59.682 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and title columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and uid columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and title columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and uid columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and title columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 
lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column paused in alert_definition" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition_version table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition_version table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="drop 
alert_definition_version table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_instance table" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column current_state_end to alert_instance" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, current_state on alert_instance" 2026-03-09T18:22:59.682 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_uid to rule_uid in alert_instance" 2026-03-09T18:22:59.683 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, current_state on alert_instance" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule table" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and title columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and uid columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info 
msg="Executing migration" logger=migrator id="add column annotations to alert_rule" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add dashboard_uid column to alert_rule" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add panel_id column to alert_rule" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule_version table" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-09T18:22:59.683 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule_version" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule_version" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule_version" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id=create_alert_configuration_table 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Add column default in alert_configuration" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: 
t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column org_id in alert_configuration" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_configuration table on org_id column" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id=create_ngalert_configuration_table 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index in ngalert_configuration on org_id column" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="clear migration entry \"remove unified alerting data\"" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="move dashboard alerts to unified alerting" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element table v1" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element org_id-folder_id-name-kind" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element_connection table v1" 2026-03-09T18:22:59.683 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index library_element org_id_uid" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="clone move dashboard alerts to unified alerting" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create data_keys table" 2026-03-09T18:22:59.683 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create kv_store table v1" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index kv_store.org_id-namespace-key" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create permission table" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index permission.role_id" 
2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_id_action_scope" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create role table" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column display_name" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add column group_name" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index role.org_id" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_org_id_name" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index role_org_id_uid" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create team role table" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.org_id" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: 
t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.team_id" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create user role table" 2026-03-09T18:22:59.963 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.org_id" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.user_id" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create builtin role table" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.role_id" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.name" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing 
migration" logger=migrator id="Add column org_id to builtin_role table" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.org_id" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index role_org_id_uid" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role.uid" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="create seed assignment table" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_role_name" 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="migrations completed" logger=migrator performed=381 skipped=0 duration=601.473715ms 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Created default organization" logger=sqlstore 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Initialising plugins" logger=plugin.manager 
2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=warn msg="[Deprecated] the datasource provisioning config is outdated. 
please upgrade" logger=provisioning.datasources filename=/etc/grafana/provisioning/datasources/ceph-dashboard.yml 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="warming cache for startup" logger=ngalert 2026-03-09T18:22:59.964 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:22:59 vm08 bash[33398]: t=2026-03-09T18:22:59+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:59 vm00 bash[17468]: cluster 2026-03-09T18:22:58.428804+0000 mgr.y (mgr.24335) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:59 vm00 bash[17468]: audit 2026-03-09T18:22:58.958640+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:59 vm00 bash[17468]: audit 2026-03-09T18:22:58.965069+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:59 vm00 bash[17468]: audit 2026-03-09T18:22:58.967438+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:59 vm00 bash[17468]: audit 2026-03-09T18:22:58.973423+0000 mon.c (mon.1) 52 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:22:59 vm00 bash[17468]: audit 2026-03-09T18:22:59.395006+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.100:0/2862966888' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:59 vm00 bash[22468]: cluster 2026-03-09T18:22:58.428804+0000 mgr.y (mgr.24335) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:59 vm00 bash[22468]: audit 2026-03-09T18:22:58.958640+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:59 vm00 bash[22468]: audit 2026-03-09T18:22:58.965069+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:59 vm00 bash[22468]: audit 2026-03-09T18:22:58.967438+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:22:59.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:59 vm00 bash[22468]: audit 2026-03-09T18:22:58.973423+0000 mon.c (mon.1) 52 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:22:59.972 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:22:59 vm00 bash[22468]: audit 2026-03-09T18:22:59.395006+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.100:0/2862966888' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:23:00.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:59 vm08 bash[17774]: cluster 2026-03-09T18:22:58.428804+0000 mgr.y (mgr.24335) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:00.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:59 vm08 bash[17774]: audit 2026-03-09T18:22:58.958640+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:00.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:59 vm08 bash[17774]: audit 2026-03-09T18:22:58.965069+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:00.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:59 vm08 bash[17774]: audit 2026-03-09T18:22:58.967438+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:00.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:59 vm08 bash[17774]: audit 2026-03-09T18:22:58.973423+0000 mon.c (mon.1) 52 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:00.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:22:59 vm08 bash[17774]: audit 2026-03-09T18:22:59.395006+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 
192.168.123.100:0/2862966888' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:23:01.059 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:23:01.059 INFO:teuthology.orchestra.run.vm00.stdout: "id": "07f33316-d515-457f-9f45-9e88bd1f5261", 2026-03-09T18:23:01.059 INFO:teuthology.orchestra.run.vm00.stdout: "name": "r", 2026-03-09T18:23:01.059 INFO:teuthology.orchestra.run.vm00.stdout: "current_period": "ddd57d0f-a0eb-4800-b8c9-c537683d4528", 2026-03-09T18:23:01.059 INFO:teuthology.orchestra.run.vm00.stdout: "epoch": 1 2026-03-09T18:23:01.059 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:23:01.117 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zonegroup create --rgw-zonegroup=default --master --default' 2026-03-09T18:23:01.267 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:00 vm00 bash[22468]: cluster 2026-03-09T18:22:59.976725+0000 mon.a (mon.0) 588 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T18:23:01.267 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:00 vm00 bash[22468]: audit 2026-03-09T18:22:59.991392+0000 mon.c (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/1648742913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:23:01.267 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:00 vm00 bash[22468]: audit 2026-03-09T18:22:59.991804+0000 mon.a (mon.0) 589 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:23:01.267 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:00 vm00 bash[17468]: cluster 2026-03-09T18:22:59.976725+0000 mon.a (mon.0) 588 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T18:23:01.267 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:00 vm00 bash[17468]: audit 2026-03-09T18:22:59.991392+0000 mon.c (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/1648742913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:23:01.267 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:00 vm00 bash[17468]: audit 2026-03-09T18:22:59.991804+0000 mon.a (mon.0) 589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:23:01.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:00 vm08 bash[17774]: cluster 2026-03-09T18:22:59.976725+0000 mon.a (mon.0) 588 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T18:23:01.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:00 vm08 bash[17774]: audit 2026-03-09T18:22:59.991392+0000 mon.c (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/1648742913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:23:01.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:00 vm08 bash[17774]: audit 2026-03-09T18:22:59.991804+0000 mon.a (mon.0) 589 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "id": "5cd25fd4-08df-44b3-b5a9-69d0c89b07a9", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "name": "default", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "api_name": "default", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "is_master": "true", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "endpoints": [], 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "hostnames": [], 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "hostnames_s3website": [], 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "master_zone": "", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "zones": [], 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "placement_targets": [], 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "default_placement": "", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "realm_id": "07f33316-d515-457f-9f45-9e88bd1f5261", 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "sync_policy": { 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: "groups": [] 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:01.503 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:23:01.549 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default' 
2026-03-09T18:23:02.008 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[17468]: cluster 2026-03-09T18:23:00.429112+0000 mgr.y (mgr.24335) 38 : cluster [DBG] pgmap v20: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:02.008 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[17468]: audit 2026-03-09T18:23:00.990635+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T18:23:02.008 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[17468]: cluster 2026-03-09T18:23:00.990674+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "id": "2acaa3ce-a2fb-4c65-8c73-331e5c0e6ec5", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "name": "z", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "domain_root": "z.rgw.meta:root", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "control_pool": "z.rgw.control", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "gc_pool": "z.rgw.log:gc", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "lc_pool": "z.rgw.log:lc", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "log_pool": "z.rgw.log", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "intent_log_pool": "z.rgw.log:intent", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "usage_log_pool": "z.rgw.log:usage", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "roles_pool": "z.rgw.meta:roles", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "reshard_pool": "z.rgw.log:reshard", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: 
"user_keys_pool": "z.rgw.meta:users.keys", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "user_email_pool": "z.rgw.meta:users.email", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "user_swift_pool": "z.rgw.meta:users.swift", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "user_uid_pool": "z.rgw.meta:users.uid", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "otp_pool": "z.rgw.otp", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "system_key": { 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "access_key": "", 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: "secret_key": "" 2026-03-09T18:23:02.101 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "placement_pools": [ 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "key": "default-placement", 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "val": { 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "index_pool": "z.rgw.buckets.index", 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "storage_classes": { 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "STANDARD": { 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "data_pool": "z.rgw.buckets.data" 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "data_extra_pool": "z.rgw.buckets.non-ec", 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "index_type": 0 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:02.102 
INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "realm_id": "07f33316-d515-457f-9f45-9e88bd1f5261", 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout: "notif_pool": "z.rgw.log:notif" 2026-03-09T18:23:02.102 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:23:02.197 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin period update --rgw-realm=r --commit' 2026-03-09T18:23:02.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:02 vm00 bash[22468]: cluster 2026-03-09T18:23:00.429112+0000 mgr.y (mgr.24335) 38 : cluster [DBG] pgmap v20: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:02.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:02 vm00 bash[22468]: audit 2026-03-09T18:23:00.990635+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T18:23:02.293 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:02 vm00 bash[22468]: cluster 2026-03-09T18:23:00.990674+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T18:23:02.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:02 vm08 bash[17774]: cluster 2026-03-09T18:23:00.429112+0000 mgr.y (mgr.24335) 38 : cluster [DBG] pgmap v20: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:02.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:02 vm08 bash[17774]: audit 2026-03-09T18:23:00.990635+0000 mon.a (mon.0) 590 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T18:23:02.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:02 vm08 bash[17774]: cluster 2026-03-09T18:23:00.990674+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 systemd[1]: Stopping Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42691]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-alertmanager.a 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[38226]: level=info ts=2026-03-09T18:23:02.634Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42705]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-alertmanager-a 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42791]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-alertmanager.a 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@alertmanager.a.service: Deactivated successfully. 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 systemd[1]: Stopped Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:23:02.825 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 systemd[1]: Started Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:23:03.085 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: cluster 2026-03-09T18:23:01.992433+0000 mon.a (mon.0) 592 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T18:23:03.085 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: audit 2026-03-09T18:23:02.124881+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.085 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: audit 2026-03-09T18:23:02.132944+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.085 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: audit 2026-03-09T18:23:02.139510+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.085 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: cephadm 2026-03-09T18:23:02.141930+0000 mgr.y (mgr.24335) 39 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T18:23:03.085 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: cephadm 2026-03-09T18:23:02.143763+0000 mgr.y (mgr.24335) 40 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: cluster 2026-03-09T18:23:02.429477+0000 mgr.y (mgr.24335) 41 : cluster [DBG] pgmap v23: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: audit 2026-03-09T18:23:02.721710+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: cephadm 2026-03-09T18:23:02.723424+0000 mgr.y (mgr.24335) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:03 vm00 bash[22468]: cephadm 2026-03-09T18:23:02.726108+0000 mgr.y (mgr.24335) 43 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: cluster 2026-03-09T18:23:01.992433+0000 mon.a (mon.0) 592 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: audit 2026-03-09T18:23:02.124881+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: audit 2026-03-09T18:23:02.132944+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: audit 2026-03-09T18:23:02.139510+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: cephadm 2026-03-09T18:23:02.141930+0000 mgr.y (mgr.24335) 39 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: cephadm 2026-03-09T18:23:02.143763+0000 mgr.y (mgr.24335) 40 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: cluster 2026-03-09T18:23:02.429477+0000 mgr.y (mgr.24335) 41 : cluster [DBG] pgmap v23: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: audit 2026-03-09T18:23:02.721710+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: cephadm 2026-03-09T18:23:02.723424+0000 mgr.y (mgr.24335) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:23:03.086 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:03 vm00 bash[17468]: cephadm 2026-03-09T18:23:02.726108+0000 mgr.y (mgr.24335) 43 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.823Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)" 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.823Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)" 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.825Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 
18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.826Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.854Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.855Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.857Z caller=main.go:518 msg=Listening address=:9093 2026-03-09T18:23:03.086 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:02 vm00 bash[42815]: level=info ts=2026-03-09T18:23:02.857Z caller=tls_config.go:191 msg="TLS is disabled." 
http2=false 2026-03-09T18:23:03.260 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: cluster 2026-03-09T18:23:01.992433+0000 mon.a (mon.0) 592 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: audit 2026-03-09T18:23:02.124881+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: audit 2026-03-09T18:23:02.132944+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: audit 2026-03-09T18:23:02.139510+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: cephadm 2026-03-09T18:23:02.141930+0000 mgr.y (mgr.24335) 39 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: cephadm 2026-03-09T18:23:02.143763+0000 mgr.y (mgr.24335) 40 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: cluster 2026-03-09T18:23:02.429477+0000 mgr.y (mgr.24335) 41 : cluster [DBG] pgmap v23: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: audit 2026-03-09T18:23:02.721710+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: cephadm 2026-03-09T18:23:02.723424+0000 mgr.y (mgr.24335) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T18:23:03.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:03 vm08 bash[17774]: cephadm 2026-03-09T18:23:02.726108+0000 mgr.y (mgr.24335) 43 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 systemd[1]: Stopping Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33898]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus.a 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=main.go:834 level=info msg="Stopping scrape manager..." 
2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.125Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.126Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.126Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.127Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.127Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33074]: ts=2026-03-09T18:23:03.127Z caller=main.go:1066 level=info msg="See you next time!" 
2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33906]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus-a 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33938]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus.a 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service: Deactivated successfully. 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 systemd[1]: Stopped Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:23:03.261 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 systemd[1]: Started Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.340Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.341Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.341Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.341Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm08 (none))" 2026-03-09T18:23:03.726 
INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.341Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.341Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.342Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.342Z caller=main.go:923 level=info msg="Starting TSDB ..." 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.343Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." 
http2=false 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.345Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.345Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.613µs 2026-03-09T18:23:03.726 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:03 vm08 bash[33963]: ts=2026-03-09T18:23:03.345Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.194519+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.196942+0000 mon.c (mon.1) 54 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.197463+0000 mgr.y (mgr.24335) 44 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.198233+0000 mon.c (mon.1) 55 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.100:9093"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.198562+0000 mgr.y (mgr.24335) 45 : audit [DBG] from='mon.? 
-' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.100:9093"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.208063+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.213800+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.214347+0000 mgr.y (mgr.24335) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.215203+0000 mon.c (mon.1) 57 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.108:3000"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.215575+0000 mgr.y (mgr.24335) 47 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.108:3000"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.223646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.228837+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.229403+0000 mgr.y (mgr.24335) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.233554+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.108:9095"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.234066+0000 mgr.y (mgr.24335) 49 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.108:9095"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.238796+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.241674+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.242549+0000 mon.c (mon.1) 61 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.243247+0000 mon.c (mon.1) 62 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.500317+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: cluster 2026-03-09T18:23:03.738047+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.739725+0000 mon.a (mon.0) 603 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:23:04.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:04 vm08 bash[17774]: audit 2026-03-09T18:23:03.741206+0000 mon.b (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/4100813143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.194519+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.196942+0000 mon.c (mon.1) 54 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.197463+0000 mgr.y (mgr.24335) 44 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.198233+0000 mon.c (mon.1) 55 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.100:9093"}]: dispatch 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.198562+0000 mgr.y (mgr.24335) 45 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.100:9093"}]: dispatch 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.208063+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.213800+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.214347+0000 mgr.y (mgr.24335) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.215203+0000 mon.c (mon.1) 57 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.108:3000"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.215575+0000 mgr.y (mgr.24335) 47 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.108:3000"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.223646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.228837+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.229403+0000 mgr.y (mgr.24335) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.233554+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.108:9095"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.234066+0000 mgr.y (mgr.24335) 49 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.108:9095"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.238796+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.241674+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.242549+0000 mon.c (mon.1) 61 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.243247+0000 mon.c (mon.1) 62 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.500317+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: cluster 2026-03-09T18:23:03.738047+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.739725+0000 mon.a (mon.0) 603 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:04 vm00 bash[22468]: audit 2026-03-09T18:23:03.741206+0000 mon.b (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/4100813143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.194519+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.196942+0000 mon.c (mon.1) 54 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.197463+0000 mgr.y (mgr.24335) 44 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.198233+0000 mon.c (mon.1) 55 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.100:9093"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.198562+0000 mgr.y (mgr.24335) 45 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.100:9093"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.208063+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.213800+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.214347+0000 mgr.y (mgr.24335) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.215203+0000 mon.c (mon.1) 57 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.108:3000"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.215575+0000 mgr.y (mgr.24335) 47 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.108:3000"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.223646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.228837+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.229403+0000 mgr.y (mgr.24335) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.233554+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.108:9095"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.234066+0000 mgr.y (mgr.24335) 49 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.108:9095"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.238796+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.241674+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.242549+0000 mon.c (mon.1) 61 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.243247+0000 mon.c (mon.1) 62 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.500317+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: cluster 2026-03-09T18:23:03.738047+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.739725+0000 mon.a (mon.0) 603 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:23:04.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[17468]: audit 2026-03-09T18:23:03.741206+0000 mon.b (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/4100813143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.696Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.697Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.697Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=23.424µs wal_replay_duration=1.352026075s total_replay_duration=1.35206098s 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.698Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.698Z caller=main.go:947 level=info msg="TSDB started" 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.698Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.713Z caller=main.go:1165 level=info msg="Completed loading of 
configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=14.324037ms db_storage=752ns remote_storage=1.253µs web_handler=511ns query_engine=691ns scrape=843.578µs scrape_sd=24.536µs notify=21.09µs notify_sd=3.868µs rules=12.945597ms 2026-03-09T18:23:04.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:04 vm08 bash[33963]: ts=2026-03-09T18:23:04.713Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 2026-03-09T18:23:05.134 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:04 vm00 bash[42815]: level=info ts=2026-03-09T18:23:04.826Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000218938s 2026-03-09T18:23:06.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:05 vm00 bash[17468]: cluster 2026-03-09T18:23:04.429786+0000 mgr.y (mgr.24335) 50 : cluster [DBG] pgmap v25: 65 pgs: 12 creating+peering, 6 active+clean, 47 unknown; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:06.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:05 vm00 bash[17468]: audit 2026-03-09T18:23:04.731005+0000 mon.a (mon.0) 604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-09T18:23:06.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:05 vm00 bash[17468]: cluster 2026-03-09T18:23:04.732304+0000 mon.a (mon.0) 605 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T18:23:06.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:05 vm00 bash[22468]: cluster 2026-03-09T18:23:04.429786+0000 mgr.y (mgr.24335) 50 : cluster [DBG] pgmap v25: 65 pgs: 12 creating+peering, 6 active+clean, 47 unknown; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:06.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:05 vm00 bash[22468]: audit 2026-03-09T18:23:04.731005+0000 mon.a (mon.0) 604 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-09T18:23:06.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:05 vm00 bash[22468]: cluster 2026-03-09T18:23:04.732304+0000 mon.a (mon.0) 605 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T18:23:06.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:05 vm08 bash[17774]: cluster 2026-03-09T18:23:04.429786+0000 mgr.y (mgr.24335) 50 : cluster [DBG] pgmap v25: 65 pgs: 12 creating+peering, 6 active+clean, 47 unknown; 449 KiB data, 49 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:06.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:05 vm08 bash[17774]: audit 2026-03-09T18:23:04.731005+0000 mon.a (mon.0) 604 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-09T18:23:06.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:05 vm08 bash[17774]: cluster 2026-03-09T18:23:04.732304+0000 mon.a (mon.0) 605 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: cluster 2026-03-09T18:23:05.743746+0000 mon.a (mon.0) 606 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:05.749842+0000 mon.b (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/4100813143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:05.752240+0000 mon.a (mon.0) 607 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.299915+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.362067+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.377563+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.380806+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.381594+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.382147+0000 mon.c (mon.1) 65 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: audit 2026-03-09T18:23:06.740666+0000 mon.a (mon.0) 611 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-09T18:23:07.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:06 vm00 bash[22468]: cluster 2026-03-09T18:23:06.740808+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: cluster 2026-03-09T18:23:05.743746+0000 mon.a (mon.0) 606 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:05.749842+0000 mon.b (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/4100813143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:05.752240+0000 mon.a (mon.0) 607 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.299915+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.362067+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.377563+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.380806+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.381594+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.382147+0000 mon.c (mon.1) 65 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: audit 2026-03-09T18:23:06.740666+0000 mon.a (mon.0) 611 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-09T18:23:07.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:06 vm00 bash[17468]: cluster 2026-03-09T18:23:06.740808+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: cluster 2026-03-09T18:23:05.743746+0000 mon.a (mon.0) 606 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:05.749842+0000 mon.b (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/4100813143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:05.752240+0000 mon.a (mon.0) 607 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.299915+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.362067+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.377563+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.380806+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.381594+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.382147+0000 mon.c (mon.1) 65 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: audit 2026-03-09T18:23:06.740666+0000 mon.a (mon.0) 611 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-09T18:23:07.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:06 vm08 bash[17774]: cluster 2026-03-09T18:23:06.740808+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T18:23:08.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:07 vm00 bash[22468]: cluster 2026-03-09T18:23:06.430169+0000 mgr.y (mgr.24335) 51 : cluster [DBG] pgmap v28: 97 pgs: 32 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 6.8 KiB/s rd, 3.6 KiB/s wr, 11 op/s 2026-03-09T18:23:08.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:07 vm00 bash[22468]: cluster 2026-03-09T18:23:07.752172+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T18:23:08.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:07 vm00 bash[22468]: audit 2026-03-09T18:23:07.760712+0000 mon.a (mon.0) 614 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:23:08.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:07 vm00 bash[17468]: cluster 2026-03-09T18:23:06.430169+0000 mgr.y (mgr.24335) 51 : cluster [DBG] pgmap v28: 97 pgs: 32 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 6.8 KiB/s rd, 3.6 KiB/s wr, 11 op/s 2026-03-09T18:23:08.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:07 vm00 bash[17468]: cluster 2026-03-09T18:23:07.752172+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T18:23:08.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:07 vm00 bash[17468]: audit 2026-03-09T18:23:07.760712+0000 mon.a (mon.0) 614 : audit [INF] from='client.? 
192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:23:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:07 vm08 bash[17774]: cluster 2026-03-09T18:23:06.430169+0000 mgr.y (mgr.24335) 51 : cluster [DBG] pgmap v28: 97 pgs: 32 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 6.8 KiB/s rd, 3.6 KiB/s wr, 11 op/s 2026-03-09T18:23:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:07 vm08 bash[17774]: cluster 2026-03-09T18:23:07.752172+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T18:23:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:07 vm08 bash[17774]: audit 2026-03-09T18:23:07.760712+0000 mon.a (mon.0) 614 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: cluster 2026-03-09T18:23:08.430617+0000 mgr.y (mgr.24335) 52 : cluster [DBG] pgmap v31: 129 pgs: 64 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: audit 2026-03-09T18:23:08.780317+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 
192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: cluster 2026-03-09T18:23:08.780778+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: audit 2026-03-09T18:23:08.806601+0000 mon.a (mon.0) 617 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: audit 2026-03-09T18:23:09.432048+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: audit 2026-03-09T18:23:09.518302+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:09 vm00 bash[22468]: audit 2026-03-09T18:23:09.524975+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: cluster 2026-03-09T18:23:08.430617+0000 mgr.y (mgr.24335) 52 : cluster [DBG] pgmap v31: 129 pgs: 64 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: audit 2026-03-09T18:23:08.780317+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 
192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: cluster 2026-03-09T18:23:08.780778+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: audit 2026-03-09T18:23:08.806601+0000 mon.a (mon.0) 617 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: audit 2026-03-09T18:23:09.432048+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: audit 2026-03-09T18:23:09.518302+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:09 vm00 bash[17468]: audit 2026-03-09T18:23:09.524975+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: cluster 2026-03-09T18:23:08.430617+0000 mgr.y (mgr.24335) 52 : cluster [DBG] pgmap v31: 129 pgs: 64 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: audit 2026-03-09T18:23:08.780317+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 
192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: cluster 2026-03-09T18:23:08.780778+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: audit 2026-03-09T18:23:08.806601+0000 mon.a (mon.0) 617 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: audit 2026-03-09T18:23:09.432048+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: audit 2026-03-09T18:23:09.518302+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:09 vm08 bash[17774]: audit 2026-03-09T18:23:09.524975+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:10.997 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:23:10.997 INFO:teuthology.orchestra.run.vm00.stdout: "id": "e06f59e6-d2a8-4d8c-b2c8-178c029715a4", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "epoch": 1, 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "predecessor_uuid": "ddd57d0f-a0eb-4800-b8c9-c537683d4528", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "sync_status": [], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "period_map": { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "id": "e06f59e6-d2a8-4d8c-b2c8-178c029715a4", 2026-03-09T18:23:10.998 
INFO:teuthology.orchestra.run.vm00.stdout: "zonegroups": [ 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "id": "5cd25fd4-08df-44b3-b5a9-69d0c89b07a9", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "name": "default", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "api_name": "default", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "is_master": "true", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "endpoints": [], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "hostnames": [], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "hostnames_s3website": [], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "master_zone": "2acaa3ce-a2fb-4c65-8c73-331e5c0e6ec5", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "zones": [ 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "id": "2acaa3ce-a2fb-4c65-8c73-331e5c0e6ec5", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "name": "z", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "endpoints": [], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "log_meta": "false", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "log_data": "false", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "bucket_index_max_shards": 11, 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "read_only": "false", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "tier_type": "", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "sync_from_all": "true", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "sync_from": [], 2026-03-09T18:23:10.998 
INFO:teuthology.orchestra.run.vm00.stdout: "redirect_zone": "" 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "placement_targets": [ 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "name": "default-placement", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "tags": [], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "storage_classes": [ 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "STANDARD" 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: ] 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "default_placement": "default-placement", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "realm_id": "07f33316-d515-457f-9f45-9e88bd1f5261", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "sync_policy": { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "groups": [] 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "short_zone_ids": [ 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "key": "2acaa3ce-a2fb-4c65-8c73-331e5c0e6ec5", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "val": 139740157 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: 
] 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "master_zonegroup": "5cd25fd4-08df-44b3-b5a9-69d0c89b07a9", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "master_zone": "2acaa3ce-a2fb-4c65-8c73-331e5c0e6ec5", 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "period_config": { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "bucket_quota": { 2026-03-09T18:23:10.998 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "user_quota": { 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "user_ratelimit": { 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_read_ops": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_write_ops": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_read_bytes": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: 
"max_write_bytes": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "bucket_ratelimit": { 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_read_ops": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_write_ops": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_read_bytes": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_write_bytes": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "anonymous_ratelimit": { 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_read_ops": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_write_ops": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_read_bytes": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "max_write_bytes": 0, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "realm_id": "07f33316-d515-457f-9f45-9e88bd1f5261", 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "realm_name": "r", 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout: "realm_epoch": 2 2026-03-09T18:23:10.999 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:23:11.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:10 vm00 bash[17468]: audit 2026-03-09T18:23:09.784739+0000 mon.a (mon.0) 621 : audit [INF] from='client.? 
192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:23:11.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:10 vm00 bash[17468]: cluster 2026-03-09T18:23:09.785052+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T18:23:11.073 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:10 vm00 bash[17468]: audit 2026-03-09T18:23:09.788226+0000 mon.a (mon.0) 623 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-09T18:23:11.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:10 vm00 bash[22468]: audit 2026-03-09T18:23:09.784739+0000 mon.a (mon.0) 621 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:23:11.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:10 vm00 bash[22468]: cluster 2026-03-09T18:23:09.785052+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T18:23:11.073 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:10 vm00 bash[22468]: audit 2026-03-09T18:23:09.788226+0000 mon.a (mon.0) 623 : audit [INF] from='client.? 
192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-09T18:23:11.074 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000' 2026-03-09T18:23:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:10 vm08 bash[17774]: audit 2026-03-09T18:23:09.784739+0000 mon.a (mon.0) 621 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T18:23:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:10 vm08 bash[17774]: cluster 2026-03-09T18:23:09.785052+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T18:23:11.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:10 vm08 bash[17774]: audit 2026-03-09T18:23:09.788226+0000 mon.a (mon.0) 623 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-09T18:23:11.543 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled rgw.foo update... 
2026-03-09T18:23:11.608 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph osd pool create foo' 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: cluster 2026-03-09T18:23:10.431004+0000 mgr.y (mgr.24335) 53 : cluster [DBG] pgmap v34: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: audit 2026-03-09T18:23:10.794282+0000 mon.a (mon.0) 624 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: cluster 2026-03-09T18:23:10.794453+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: audit 2026-03-09T18:23:11.539596+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: audit 2026-03-09T18:23:11.548497+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: audit 2026-03-09T18:23:11.549773+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:11.866 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:11 vm00 bash[22468]: audit 2026-03-09T18:23:11.550588+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:11.866 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: cluster 2026-03-09T18:23:10.431004+0000 mgr.y (mgr.24335) 53 : cluster [DBG] pgmap v34: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T18:23:11.867 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: audit 2026-03-09T18:23:10.794282+0000 mon.a (mon.0) 624 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-09T18:23:11.867 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: cluster 2026-03-09T18:23:10.794453+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T18:23:11.867 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: audit 2026-03-09T18:23:11.539596+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:11.867 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: audit 2026-03-09T18:23:11.548497+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:11.867 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: audit 2026-03-09T18:23:11.549773+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:11.867 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:11 vm00 bash[17468]: audit 
2026-03-09T18:23:11.550588+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: cluster 2026-03-09T18:23:10.431004+0000 mgr.y (mgr.24335) 53 : cluster [DBG] pgmap v34: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: audit 2026-03-09T18:23:10.794282+0000 mon.a (mon.0) 624 : audit [INF] from='client.? 192.168.123.100:0/463817825' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: cluster 2026-03-09T18:23:10.794453+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: audit 2026-03-09T18:23:11.539596+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: audit 2026-03-09T18:23:11.548497+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: audit 2026-03-09T18:23:11.549773+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:12.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:11 vm08 bash[17774]: audit 2026-03-09T18:23:11.550588+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.24335 
192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:12.855 INFO:teuthology.orchestra.run.vm00.stderr:pool 'foo' created 2026-03-09T18:23:12.910 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'rbd pool init foo' 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:12 vm00 bash[22468]: audit 2026-03-09T18:23:11.533057+0000 mgr.y (mgr.24335) 54 : audit [DBG] from='client.24496 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:12 vm00 bash[22468]: cephadm 2026-03-09T18:23:11.534407+0000 mgr.y (mgr.24335) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:12 vm00 bash[22468]: audit 2026-03-09T18:23:12.052988+0000 mon.c (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/413886353' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:12 vm00 bash[22468]: audit 2026-03-09T18:23:12.053817+0000 mon.a (mon.0) 627 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:12 vm00 bash[17468]: audit 2026-03-09T18:23:11.533057+0000 mgr.y (mgr.24335) 54 : audit [DBG] from='client.24496 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:12 vm00 bash[17468]: cephadm 2026-03-09T18:23:11.534407+0000 mgr.y (mgr.24335) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T18:23:13.083 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:12 vm00 bash[17468]: audit 2026-03-09T18:23:12.052988+0000 mon.c (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/413886353' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T18:23:13.084 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:12 vm00 bash[17468]: audit 2026-03-09T18:23:12.053817+0000 mon.a (mon.0) 627 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T18:23:13.084 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:12 vm00 bash[42815]: level=info ts=2026-03-09T18:23:12.836Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.010084697s 2026-03-09T18:23:13.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:12 vm08 bash[17774]: audit 2026-03-09T18:23:11.533057+0000 mgr.y (mgr.24335) 54 : audit [DBG] from='client.24496 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:23:13.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:12 vm08 bash[17774]: cephadm 2026-03-09T18:23:11.534407+0000 mgr.y (mgr.24335) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T18:23:13.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:12 vm08 bash[17774]: audit 2026-03-09T18:23:12.052988+0000 mon.c (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/413886353' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T18:23:13.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:12 vm08 bash[17774]: audit 2026-03-09T18:23:12.053817+0000 mon.a (mon.0) 627 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T18:23:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:13 vm00 bash[22468]: cluster 2026-03-09T18:23:12.431400+0000 mgr.y (mgr.24335) 56 : cluster [DBG] pgmap v36: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 218 B/s rd, 437 B/s wr, 1 op/s 2026-03-09T18:23:14.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:13 vm00 bash[22468]: audit 2026-03-09T18:23:12.847356+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:13 vm00 bash[22468]: cluster 2026-03-09T18:23:12.847510+0000 mon.a (mon.0) 629 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:13 vm00 bash[22468]: audit 2026-03-09T18:23:13.228224+0000 mon.c (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3383502993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:13 vm00 bash[22468]: audit 2026-03-09T18:23:13.228548+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:13 vm00 bash[17468]: cluster 2026-03-09T18:23:12.431400+0000 mgr.y (mgr.24335) 56 : cluster [DBG] pgmap v36: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 218 B/s rd, 437 B/s wr, 1 op/s 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:13 vm00 bash[17468]: audit 2026-03-09T18:23:12.847356+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:13 vm00 bash[17468]: cluster 2026-03-09T18:23:12.847510+0000 mon.a (mon.0) 629 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:13 vm00 bash[17468]: audit 2026-03-09T18:23:13.228224+0000 mon.c (mon.1) 70 : audit [INF] from='client.? 
192.168.123.100:0/3383502993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T18:23:14.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:13 vm00 bash[17468]: audit 2026-03-09T18:23:13.228548+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T18:23:14.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:13 vm08 bash[17774]: cluster 2026-03-09T18:23:12.431400+0000 mgr.y (mgr.24335) 56 : cluster [DBG] pgmap v36: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 218 B/s rd, 437 B/s wr, 1 op/s 2026-03-09T18:23:14.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:13 vm08 bash[17774]: audit 2026-03-09T18:23:12.847356+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T18:23:14.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:13 vm08 bash[17774]: cluster 2026-03-09T18:23:12.847510+0000 mon.a (mon.0) 629 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T18:23:14.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:13 vm08 bash[17774]: audit 2026-03-09T18:23:13.228224+0000 mon.c (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3383502993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T18:23:14.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:13 vm08 bash[17774]: audit 2026-03-09T18:23:13.228548+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:13.843710+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: cluster 2026-03-09T18:23:13.844037+0000 mon.a (mon.0) 632 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.668074+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.674535+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.680666+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.684978+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.688930+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.689940+0000 mon.c (mon.1) 71 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.690173+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": 
["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.693027+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.697496+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:14 vm08 bash[17774]: audit 2026-03-09T18:23:14.698963+0000 mon.c (mon.1) 72 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:13.843710+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: cluster 2026-03-09T18:23:13.844037+0000 mon.a (mon.0) 632 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.668074+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.674535+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.680666+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.684978+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.688930+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.689940+0000 mon.c (mon.1) 71 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.690173+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": 
["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.693027+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.697496+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:14 vm00 bash[17468]: audit 2026-03-09T18:23:14.698963+0000 mon.c (mon.1) 72 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:13.843710+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: cluster 2026-03-09T18:23:13.844037+0000 mon.a (mon.0) 632 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.668074+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.674535+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.680666+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.684978+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.688930+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.689940+0000 mon.c (mon.1) 71 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.690173+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": 
["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:14.980 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.693027+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:23:14.981 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.697496+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:14.981 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:14 vm00 bash[22468]: audit 2026-03-09T18:23:14.698963+0000 mon.c (mon.1) 72 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:15.294 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.294 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:23:15.295 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.295 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.295 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.295 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:23:15.295 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.295 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.295 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:23:15.635 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.635 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.635 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:23:15.635 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.635 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.635 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:15.636 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:15 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:23:15.979 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply iscsi foo u p' 2026-03-09T18:23:16.057 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:23:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:16.058 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:23:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:16.058 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:23:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:16.058 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:23:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:16.058 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:23:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:16.058 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:23:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: cluster 2026-03-09T18:23:14.432029+0000 mgr.y (mgr.24335) 57 : cluster [DBG] pgmap v39: 161 pgs: 25 unknown, 136 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: cephadm 2026-03-09T18:23:14.686101+0000 mgr.y (mgr.24335) 58 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: cephadm 2026-03-09T18:23:14.699494+0000 mgr.y (mgr.24335) 59 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: cluster 2026-03-09T18:23:14.868410+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: audit 2026-03-09T18:23:15.415973+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: audit 2026-03-09T18:23:15.420863+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: audit 2026-03-09T18:23:15.421456+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: audit 
2026-03-09T18:23:15.425662+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: audit 2026-03-09T18:23:15.445990+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.058 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:15 vm08 bash[17774]: audit 2026-03-09T18:23:15.450561+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: cluster 2026-03-09T18:23:14.432029+0000 mgr.y (mgr.24335) 57 : cluster [DBG] pgmap v39: 161 pgs: 25 unknown, 136 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: cephadm 2026-03-09T18:23:14.686101+0000 mgr.y (mgr.24335) 58 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: cephadm 2026-03-09T18:23:14.699494+0000 mgr.y (mgr.24335) 59 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: cluster 2026-03-09T18:23:14.868410+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: audit 2026-03-09T18:23:15.415973+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: audit 2026-03-09T18:23:15.420863+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: audit 2026-03-09T18:23:15.421456+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow 
rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: audit 2026-03-09T18:23:15.425662+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: audit 2026-03-09T18:23:15.445990+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:15 vm00 bash[17468]: audit 2026-03-09T18:23:15.450561+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:15 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:15] "GET /metrics HTTP/1.1" 200 197398 "" "Prometheus/2.33.4" 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: cluster 2026-03-09T18:23:14.432029+0000 mgr.y (mgr.24335) 57 : cluster [DBG] pgmap v39: 161 pgs: 25 unknown, 136 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: cephadm 2026-03-09T18:23:14.686101+0000 mgr.y (mgr.24335) 58 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: cephadm 2026-03-09T18:23:14.699494+0000 mgr.y (mgr.24335) 59 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: cluster 2026-03-09T18:23:14.868410+0000 mon.a (mon.0) 641 : cluster [DBG] 
osdmap e65: 8 total, 8 up, 8 in 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: audit 2026-03-09T18:23:15.415973+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: audit 2026-03-09T18:23:15.420863+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: audit 2026-03-09T18:23:15.421456+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: audit 2026-03-09T18:23:15.425662+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: audit 2026-03-09T18:23:15.445990+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:15 vm00 bash[22468]: audit 2026-03-09T18:23:15.450561+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:16.328 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:23:16 vm08 systemd[1]: 
/etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:23:16.577 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled iscsi.foo update... 2026-03-09T18:23:16.672 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180' 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: cephadm 2026-03-09T18:23:15.451983+0000 mgr.y (mgr.24335) 60 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: cluster 2026-03-09T18:23:15.878874+0000 mon.a (mon.0) 646 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: audit 2026-03-09T18:23:16.313049+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: audit 2026-03-09T18:23:16.316654+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: audit 2026-03-09T18:23:16.317673+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: audit 2026-03-09T18:23:16.318433+0000 mon.c (mon.1) 77 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:16 vm00 bash[17468]: audit 2026-03-09T18:23:16.572014+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: cephadm 2026-03-09T18:23:15.451983+0000 mgr.y (mgr.24335) 60 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: cluster 2026-03-09T18:23:15.878874+0000 mon.a (mon.0) 646 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: audit 2026-03-09T18:23:16.313049+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: audit 2026-03-09T18:23:16.316654+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: audit 2026-03-09T18:23:16.317673+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: audit 2026-03-09T18:23:16.318433+0000 mon.c (mon.1) 77 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:16.930 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:16 vm00 bash[22468]: audit 2026-03-09T18:23:16.572014+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:17.225 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: cephadm 2026-03-09T18:23:15.451983+0000 mgr.y (mgr.24335) 60 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:23:17.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: cluster 2026-03-09T18:23:15.878874+0000 mon.a (mon.0) 646 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T18:23:17.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: audit 2026-03-09T18:23:16.313049+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:17.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: audit 2026-03-09T18:23:16.316654+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:17.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: audit 2026-03-09T18:23:16.317673+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:17.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: audit 2026-03-09T18:23:16.318433+0000 mon.c (mon.1) 77 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:17.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:16 vm08 bash[17774]: audit 2026-03-09T18:23:16.572014+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:18.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:17 vm00 bash[17468]: cluster 2026-03-09T18:23:16.433498+0000 mgr.y (mgr.24335) 61 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T18:23:18.134 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:17 vm00 bash[17468]: audit 2026-03-09T18:23:16.565882+0000 mgr.y (mgr.24335) 62 : audit [DBG] from='client.14619 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:23:18.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:17 vm00 bash[17468]: cephadm 2026-03-09T18:23:16.566717+0000 mgr.y (mgr.24335) 63 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-09T18:23:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:17 vm00 bash[22468]: cluster 2026-03-09T18:23:16.433498+0000 mgr.y (mgr.24335) 61 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T18:23:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:17 vm00 bash[22468]: audit 2026-03-09T18:23:16.565882+0000 mgr.y (mgr.24335) 62 : audit [DBG] from='client.14619 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:23:18.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:17 vm00 bash[22468]: cephadm 2026-03-09T18:23:16.566717+0000 mgr.y (mgr.24335) 63 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-09T18:23:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:17 vm08 bash[17774]: cluster 2026-03-09T18:23:16.433498+0000 mgr.y (mgr.24335) 61 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T18:23:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:17 vm08 bash[17774]: audit 2026-03-09T18:23:16.565882+0000 mgr.y (mgr.24335) 62 : audit [DBG] from='client.14619 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", 
"api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:23:18.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:17 vm08 bash[17774]: cephadm 2026-03-09T18:23:16.566717+0000 mgr.y (mgr.24335) 63 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-09T18:23:18.226 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:23:17 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:23:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:19 vm00 bash[22468]: cluster 2026-03-09T18:23:18.433838+0000 mgr.y (mgr.24335) 64 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T18:23:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:19 vm00 bash[22468]: audit 2026-03-09T18:23:18.512148+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:19 vm00 bash[22468]: audit 2026-03-09T18:23:19.382239+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:19 vm00 bash[22468]: audit 2026-03-09T18:23:19.458897+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:19 vm00 bash[17468]: cluster 2026-03-09T18:23:18.433838+0000 mgr.y (mgr.24335) 64 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T18:23:19.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:19 vm00 bash[17468]: audit 2026-03-09T18:23:18.512148+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:19 vm00 bash[17468]: audit 
2026-03-09T18:23:19.382239+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:19 vm00 bash[17468]: audit 2026-03-09T18:23:19.458897+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:19 vm08 bash[17774]: cluster 2026-03-09T18:23:18.433838+0000 mgr.y (mgr.24335) 64 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T18:23:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:19 vm08 bash[17774]: audit 2026-03-09T18:23:18.512148+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:19 vm08 bash[17774]: audit 2026-03-09T18:23:19.382239+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:19.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:19 vm08 bash[17774]: audit 2026-03-09T18:23:19.458897+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: cephadm 2026-03-09T18:23:19.463478+0000 mgr.y (mgr.24335) 65 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.013341+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.022566+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.027471+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.035059+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.035264+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.038083+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24335 ' 
entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T18:23:20.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:20 vm00 bash[17468]: audit 2026-03-09T18:23:20.041290+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: cephadm 2026-03-09T18:23:19.463478+0000 mgr.y (mgr.24335) 65 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.013341+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.022566+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.027471+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.035059+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.035264+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.038083+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24335 ' 
entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T18:23:20.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:20 vm00 bash[22468]: audit 2026-03-09T18:23:20.041290+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: cephadm 2026-03-09T18:23:19.463478+0000 mgr.y (mgr.24335) 65 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.013341+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.022566+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.027471+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.035059+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.035264+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.038083+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24335 ' 
entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T18:23:20.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:20 vm08 bash[17774]: audit 2026-03-09T18:23:20.041290+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: cephadm 2026-03-09T18:23:20.030120+0000 mgr.y (mgr.24335) 66 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: cephadm 2026-03-09T18:23:20.042167+0000 mgr.y (mgr.24335) 67 : cephadm [INF] Deploying daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: cluster 2026-03-09T18:23:20.434486+0000 mgr.y (mgr.24335) 68 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 170 KiB/s rd, 6.5 KiB/s wr, 325 op/s 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:20.784330+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:20.786238+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:20.787259+0000 mon.c (mon.1) 81 : audit [DBG] 
from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:20.787785+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: cluster 2026-03-09T18:23:21.057372+0000 mon.a (mon.0) 658 : cluster [DBG] mgrmap e20: y(active, since 52s), standbys: x 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: cluster 2026-03-09T18:23:21.108269+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:21.439916+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.100:0/1095328269' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:21.624882+0000 mon.c (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/3144563707' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]: dispatch 2026-03-09T18:23:22.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:21 vm00 bash[17468]: audit 2026-03-09T18:23:21.625414+0000 mon.a (mon.0) 660 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]: dispatch 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: cephadm 2026-03-09T18:23:20.030120+0000 mgr.y (mgr.24335) 66 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: cephadm 2026-03-09T18:23:20.042167+0000 mgr.y (mgr.24335) 67 : cephadm [INF] Deploying daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: cluster 2026-03-09T18:23:20.434486+0000 mgr.y (mgr.24335) 68 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 170 KiB/s rd, 6.5 KiB/s wr, 325 op/s 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:20.784330+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:20.786238+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:20.787259+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:20.787785+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:22.135 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: cluster 2026-03-09T18:23:21.057372+0000 mon.a (mon.0) 658 : cluster [DBG] mgrmap e20: y(active, since 52s), standbys: x 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: cluster 2026-03-09T18:23:21.108269+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:21.439916+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.100:0/1095328269' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:21.624882+0000 mon.c (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/3144563707' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]: dispatch 2026-03-09T18:23:22.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:21 vm00 bash[22468]: audit 2026-03-09T18:23:21.625414+0000 mon.a (mon.0) 660 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]: dispatch 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: cephadm 2026-03-09T18:23:20.030120+0000 mgr.y (mgr.24335) 66 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: cephadm 2026-03-09T18:23:20.042167+0000 mgr.y (mgr.24335) 67 : cephadm [INF] Deploying daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: cluster 2026-03-09T18:23:20.434486+0000 mgr.y (mgr.24335) 68 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 170 KiB/s rd, 6.5 KiB/s wr, 325 op/s 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:20.784330+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:20.786238+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:20.787259+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:20.787785+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:22.225 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: cluster 2026-03-09T18:23:21.057372+0000 mon.a (mon.0) 658 : cluster [DBG] mgrmap e20: y(active, since 52s), standbys: x 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: cluster 2026-03-09T18:23:21.108269+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:21.439916+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.100:0/1095328269' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:21.624882+0000 mon.c (mon.1) 83 : audit [INF] from='client.? 192.168.123.100:0/3144563707' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]: dispatch 2026-03-09T18:23:22.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:21 vm08 bash[17774]: audit 2026-03-09T18:23:21.625414+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]: dispatch 2026-03-09T18:23:23.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:23 vm08 bash[17774]: audit 2026-03-09T18:23:22.129693+0000 mon.a (mon.0) 661 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]': finished 2026-03-09T18:23:23.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:23 vm08 bash[17774]: cluster 2026-03-09T18:23:22.129788+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T18:23:23.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:23 vm08 bash[17774]: audit 2026-03-09T18:23:22.322012+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 192.168.123.100:0/230789623' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3565704494"}]: dispatch 2026-03-09T18:23:23.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:23 vm08 bash[17774]: cluster 2026-03-09T18:23:22.436741+0000 mgr.y (mgr.24335) 69 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 456 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 169 KiB/s rd, 5.5 KiB/s wr, 323 op/s 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:23 vm00 bash[17468]: audit 2026-03-09T18:23:22.129693+0000 mon.a (mon.0) 661 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]': finished 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:23 vm00 bash[17468]: cluster 2026-03-09T18:23:22.129788+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:23 vm00 bash[17468]: audit 2026-03-09T18:23:22.322012+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
192.168.123.100:0/230789623' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3565704494"}]: dispatch 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:23 vm00 bash[17468]: cluster 2026-03-09T18:23:22.436741+0000 mgr.y (mgr.24335) 69 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 456 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 169 KiB/s rd, 5.5 KiB/s wr, 323 op/s 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:23 vm00 bash[22468]: audit 2026-03-09T18:23:22.129693+0000 mon.a (mon.0) 661 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1438077138"}]': finished 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:23 vm00 bash[22468]: cluster 2026-03-09T18:23:22.129788+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:23 vm00 bash[22468]: audit 2026-03-09T18:23:22.322012+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
192.168.123.100:0/230789623' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3565704494"}]: dispatch 2026-03-09T18:23:23.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:23 vm00 bash[22468]: cluster 2026-03-09T18:23:22.436741+0000 mgr.y (mgr.24335) 69 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 456 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 169 KiB/s rd, 5.5 KiB/s wr, 323 op/s 2026-03-09T18:23:23.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:23.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:23:23.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:23.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:23.142049+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 
192.168.123.100:0/230789623' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3565704494"}]': finished 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: cluster 2026-03-09T18:23:23.142074+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:23.329429+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.100:0/3888842329' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2057130512"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:23.524614+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:23.999825+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.007215+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.011187+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.012880+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.021120+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.025235+0000 mon.c (mon.1) 86 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.030815+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.033664+0000 mon.c (mon.1) 87 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.034594+0000 mon.c (mon.1) 88 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:24 vm00 bash[17468]: audit 2026-03-09T18:23:24.035310+0000 mon.c (mon.1) 89 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:23.142049+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 
192.168.123.100:0/230789623' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3565704494"}]': finished 2026-03-09T18:23:24.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: cluster 2026-03-09T18:23:23.142074+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:23.329429+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.100:0/3888842329' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2057130512"}]: dispatch 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:23.524614+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:23.999825+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.007215+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.011187+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.012880+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.021120+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.025235+0000 mon.c (mon.1) 86 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.030815+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.033664+0000 mon.c (mon.1) 87 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.034594+0000 mon.c (mon.1) 88 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:24.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:24 vm00 bash[22468]: audit 2026-03-09T18:23:24.035310+0000 mon.c (mon.1) 89 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:23.142049+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 
192.168.123.100:0/230789623' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3565704494"}]': finished 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: cluster 2026-03-09T18:23:23.142074+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:23.329429+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.100:0/3888842329' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2057130512"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:23.524614+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:23.999825+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.007215+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.011187+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.012880+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 
09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.021120+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.025235+0000 mon.c (mon.1) 86 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.030815+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.033664+0000 mon.c (mon.1) 87 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.034594+0000 mon.c (mon.1) 88 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:23:24.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:24 vm08 bash[17774]: audit 2026-03-09T18:23:24.035310+0000 mon.c (mon.1) 89 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: audit 2026-03-09T18:23:24.011662+0000 mgr.y (mgr.24335) 70 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: cephadm 2026-03-09T18:23:24.012723+0000 mgr.y (mgr.24335) 71 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: audit 2026-03-09T18:23:24.013137+0000 mgr.y (mgr.24335) 72 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: audit 2026-03-09T18:23:24.025649+0000 mgr.y (mgr.24335) 73 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: cephadm 2026-03-09T18:23:24.037779+0000 mgr.y (mgr.24335) 74 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: audit 2026-03-09T18:23:24.175712+0000 mon.a (mon.0) 672 : audit [INF] from='client.? 
192.168.123.100:0/3888842329' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2057130512"}]': finished 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: cluster 2026-03-09T18:23:24.175807+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: audit 2026-03-09T18:23:24.408891+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: cluster 2026-03-09T18:23:24.437354+0000 mgr.y (mgr.24335) 75 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:25 vm00 bash[17468]: audit 2026-03-09T18:23:24.443658+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.100:0/3741353256' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2374936913"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: audit 2026-03-09T18:23:24.011662+0000 mgr.y (mgr.24335) 70 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: cephadm 2026-03-09T18:23:24.012723+0000 mgr.y (mgr.24335) 71 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: audit 2026-03-09T18:23:24.013137+0000 mgr.y (mgr.24335) 72 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: audit 2026-03-09T18:23:24.025649+0000 mgr.y (mgr.24335) 73 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: cephadm 2026-03-09T18:23:24.037779+0000 mgr.y (mgr.24335) 74 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: audit 2026-03-09T18:23:24.175712+0000 mon.a (mon.0) 672 : audit [INF] from='client.? 192.168.123.100:0/3888842329' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2057130512"}]': finished 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: cluster 2026-03-09T18:23:24.175807+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: audit 2026-03-09T18:23:24.408891+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: cluster 2026-03-09T18:23:24.437354+0000 mgr.y (mgr.24335) 75 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T18:23:25.437 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:25 vm00 bash[22468]: audit 2026-03-09T18:23:24.443658+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 
192.168.123.100:0/3741353256' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2374936913"}]: dispatch 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: audit 2026-03-09T18:23:24.011662+0000 mgr.y (mgr.24335) 70 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: cephadm 2026-03-09T18:23:24.012723+0000 mgr.y (mgr.24335) 71 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: audit 2026-03-09T18:23:24.013137+0000 mgr.y (mgr.24335) 72 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: audit 2026-03-09T18:23:24.025649+0000 mgr.y (mgr.24335) 73 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: cephadm 2026-03-09T18:23:24.037779+0000 mgr.y (mgr.24335) 74 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: audit 2026-03-09T18:23:24.175712+0000 mon.a (mon.0) 672 : audit [INF] from='client.? 
192.168.123.100:0/3888842329' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2057130512"}]': finished 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: cluster 2026-03-09T18:23:24.175807+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: audit 2026-03-09T18:23:24.408891+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: cluster 2026-03-09T18:23:24.437354+0000 mgr.y (mgr.24335) 75 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 59 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T18:23:25.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:25 vm08 bash[17774]: audit 2026-03-09T18:23:24.443658+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.100:0/3741353256' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2374936913"}]: dispatch 2026-03-09T18:23:26.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:25] "GET /metrics HTTP/1.1" 200 197398 "" "Prometheus/2.33.4" 2026-03-09T18:23:26.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:26 vm08 bash[17774]: audit 2026-03-09T18:23:25.421513+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 
192.168.123.100:0/3741353256' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2374936913"}]': finished 2026-03-09T18:23:26.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:26 vm08 bash[17774]: cluster 2026-03-09T18:23:25.421584+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T18:23:26.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:26 vm08 bash[17774]: audit 2026-03-09T18:23:25.613241+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.100:0/1598492513' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1230841882"}]: dispatch 2026-03-09T18:23:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:26 vm00 bash[17468]: audit 2026-03-09T18:23:25.421513+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 192.168.123.100:0/3741353256' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2374936913"}]': finished 2026-03-09T18:23:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:26 vm00 bash[17468]: cluster 2026-03-09T18:23:25.421584+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T18:23:26.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:26 vm00 bash[17468]: audit 2026-03-09T18:23:25.613241+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.100:0/1598492513' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1230841882"}]: dispatch 2026-03-09T18:23:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:26 vm00 bash[22468]: audit 2026-03-09T18:23:25.421513+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 
192.168.123.100:0/3741353256' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2374936913"}]': finished 2026-03-09T18:23:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:26 vm00 bash[22468]: cluster 2026-03-09T18:23:25.421584+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T18:23:26.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:26 vm00 bash[22468]: audit 2026-03-09T18:23:25.613241+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.100:0/1598492513' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1230841882"}]: dispatch 2026-03-09T18:23:27.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:27 vm08 bash[17774]: audit 2026-03-09T18:23:26.431097+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 192.168.123.100:0/1598492513' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1230841882"}]': finished 2026-03-09T18:23:27.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:27 vm08 bash[17774]: cluster 2026-03-09T18:23:26.433425+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T18:23:27.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:27 vm08 bash[17774]: cluster 2026-03-09T18:23:26.437759+0000 mgr.y (mgr.24335) 76 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 60 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 767 B/s wr, 22 op/s 2026-03-09T18:23:27.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:27 vm08 bash[17774]: audit 2026-03-09T18:23:26.635772+0000 mon.a (mon.0) 681 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]: dispatch 2026-03-09T18:23:27.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:27 vm08 bash[17774]: audit 2026-03-09T18:23:26.636812+0000 mon.b (mon.2) 37 : audit [INF] from='client.? 192.168.123.100:0/3046724865' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]: dispatch 2026-03-09T18:23:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:27 vm00 bash[22468]: audit 2026-03-09T18:23:26.431097+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 192.168.123.100:0/1598492513' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1230841882"}]': finished 2026-03-09T18:23:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:27 vm00 bash[22468]: cluster 2026-03-09T18:23:26.433425+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T18:23:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:27 vm00 bash[22468]: cluster 2026-03-09T18:23:26.437759+0000 mgr.y (mgr.24335) 76 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 60 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 767 B/s wr, 22 op/s 2026-03-09T18:23:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:27 vm00 bash[22468]: audit 2026-03-09T18:23:26.635772+0000 mon.a (mon.0) 681 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]: dispatch 2026-03-09T18:23:27.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:27 vm00 bash[22468]: audit 2026-03-09T18:23:26.636812+0000 mon.b (mon.2) 37 : audit [INF] from='client.? 
192.168.123.100:0/3046724865' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]: dispatch 2026-03-09T18:23:27.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:27 vm00 bash[17468]: audit 2026-03-09T18:23:26.431097+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 192.168.123.100:0/1598492513' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1230841882"}]': finished 2026-03-09T18:23:27.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:27 vm00 bash[17468]: cluster 2026-03-09T18:23:26.433425+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T18:23:27.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:27 vm00 bash[17468]: cluster 2026-03-09T18:23:26.437759+0000 mgr.y (mgr.24335) 76 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 60 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 767 B/s wr, 22 op/s 2026-03-09T18:23:27.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:27 vm00 bash[17468]: audit 2026-03-09T18:23:26.635772+0000 mon.a (mon.0) 681 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]: dispatch 2026-03-09T18:23:27.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:27 vm00 bash[17468]: audit 2026-03-09T18:23:26.636812+0000 mon.b (mon.2) 37 : audit [INF] from='client.? 
192.168.123.100:0/3046724865' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]: dispatch 2026-03-09T18:23:28.446 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:23:27 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:27] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:23:28.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:28 vm08 bash[17774]: audit 2026-03-09T18:23:27.445343+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]': finished 2026-03-09T18:23:28.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:28 vm08 bash[17774]: cluster 2026-03-09T18:23:27.445505+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T18:23:28.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:28 vm08 bash[17774]: audit 2026-03-09T18:23:27.635309+0000 mon.c (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/2214506326' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]: dispatch 2026-03-09T18:23:28.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:28 vm08 bash[17774]: audit 2026-03-09T18:23:27.635905+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]: dispatch 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:28 vm00 bash[22468]: audit 2026-03-09T18:23:27.445343+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]': finished 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:28 vm00 bash[22468]: cluster 2026-03-09T18:23:27.445505+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:28 vm00 bash[22468]: audit 2026-03-09T18:23:27.635309+0000 mon.c (mon.1) 90 : audit [INF] from='client.? 192.168.123.100:0/2214506326' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]: dispatch 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:28 vm00 bash[22468]: audit 2026-03-09T18:23:27.635905+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]: dispatch 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:28 vm00 bash[17468]: audit 2026-03-09T18:23:27.445343+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2948627942"}]': finished 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:28 vm00 bash[17468]: cluster 2026-03-09T18:23:27.445505+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:28 vm00 bash[17468]: audit 2026-03-09T18:23:27.635309+0000 mon.c (mon.1) 90 : audit [INF] from='client.? 
192.168.123.100:0/2214506326' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]: dispatch 2026-03-09T18:23:28.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:28 vm00 bash[17468]: audit 2026-03-09T18:23:27.635905+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: cluster 2026-03-09T18:23:28.438643+0000 mgr.y (mgr.24335) 77 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 60 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 721 B/s wr, 21 op/s 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.449604+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.449954+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.450263+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.450702+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' 
cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.455934+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]': finished 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: cluster 2026-03-09T18:23:28.456027+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.461254+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.461642+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.462062+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.462478+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.505351+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.24335 
192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.505690+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.527120+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.527798+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.686081+0000 mon.c (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/889962808' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]: dispatch 2026-03-09T18:23:29.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:29 vm08 bash[17774]: audit 2026-03-09T18:23:28.686587+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: cluster 2026-03-09T18:23:28.438643+0000 mgr.y (mgr.24335) 77 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 60 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 721 B/s wr, 21 op/s 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.449604+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.449954+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.450263+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.450702+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.455934+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]': finished 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: cluster 2026-03-09T18:23:28.456027+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.461254+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.461642+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.462062+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.462478+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.505351+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 
2026-03-09T18:23:28.505690+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.527120+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.527798+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.686081+0000 mon.c (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/889962808' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]: dispatch 2026-03-09T18:23:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:29 vm00 bash[22468]: audit 2026-03-09T18:23:28.686587+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: cluster 2026-03-09T18:23:28.438643+0000 mgr.y (mgr.24335) 77 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 60 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 721 B/s wr, 21 op/s 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.449604+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.449954+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.450263+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.450702+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.455934+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1230841882"}]': finished 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: cluster 2026-03-09T18:23:28.456027+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.461254+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.461642+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.462062+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.462478+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.505351+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 
2026-03-09T18:23:28.505690+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.527120+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.527798+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.686081+0000 mon.c (mon.1) 97 : audit [INF] from='client.? 192.168.123.100:0/889962808' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]: dispatch 2026-03-09T18:23:29.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:29 vm00 bash[17468]: audit 2026-03-09T18:23:28.686587+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]: dispatch 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.467439+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.467508+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]': finished 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.467550+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.467590+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]': finished 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.467631+0000 mon.a (mon.0) 698 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]': finished 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: cluster 2026-03-09T18:23:29.467668+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.671589+0000 mon.c (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/3361943871' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]: dispatch 2026-03-09T18:23:30.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:30 vm00 bash[22468]: audit 2026-03-09T18:23:29.671987+0000 mon.a (mon.0) 700 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]: dispatch 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.467439+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.467508+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]': finished 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.467550+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.467590+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]': finished 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.467631+0000 mon.a (mon.0) 698 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]': finished 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: cluster 2026-03-09T18:23:29.467668+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.671589+0000 mon.c (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/3361943871' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]: dispatch 2026-03-09T18:23:30.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:30 vm00 bash[17468]: audit 2026-03-09T18:23:29.671987+0000 mon.a (mon.0) 700 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]: dispatch 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.467439+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.467508+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.6", "id": [1, 5]}]': finished 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.467550+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.467590+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 2]}]': finished 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.467631+0000 mon.a (mon.0) 698 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1939871250"}]': finished 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: cluster 2026-03-09T18:23:29.467668+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.671589+0000 mon.c (mon.1) 98 : audit [INF] from='client.? 192.168.123.100:0/3361943871' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]: dispatch 2026-03-09T18:23:30.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:30 vm08 bash[17774]: audit 2026-03-09T18:23:29.671987+0000 mon.a (mon.0) 700 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]: dispatch 2026-03-09T18:23:31.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:31 vm00 bash[17468]: cluster 2026-03-09T18:23:30.439023+0000 mgr.y (mgr.24335) 78 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 61 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:31.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:31 vm00 bash[17468]: audit 2026-03-09T18:23:30.488445+0000 mon.a (mon.0) 701 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]': finished 2026-03-09T18:23:31.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:31 vm00 bash[17468]: cluster 2026-03-09T18:23:30.489367+0000 mon.a (mon.0) 702 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T18:23:31.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:31 vm00 bash[17468]: audit 2026-03-09T18:23:30.712357+0000 mon.c (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2095089108' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]: dispatch 2026-03-09T18:23:31.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:31 vm00 bash[17468]: audit 2026-03-09T18:23:30.712741+0000 mon.a (mon.0) 703 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]: dispatch 2026-03-09T18:23:31.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:31 vm00 bash[22468]: cluster 2026-03-09T18:23:30.439023+0000 mgr.y (mgr.24335) 78 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 61 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:31.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:31 vm00 bash[22468]: audit 2026-03-09T18:23:30.488445+0000 mon.a (mon.0) 701 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]': finished 2026-03-09T18:23:31.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:31 vm00 bash[22468]: cluster 2026-03-09T18:23:30.489367+0000 mon.a (mon.0) 702 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T18:23:31.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:31 vm00 bash[22468]: audit 2026-03-09T18:23:30.712357+0000 mon.c (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2095089108' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]: dispatch 2026-03-09T18:23:31.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:31 vm00 bash[22468]: audit 2026-03-09T18:23:30.712741+0000 mon.a (mon.0) 703 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]: dispatch 2026-03-09T18:23:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:31 vm08 bash[17774]: cluster 2026-03-09T18:23:30.439023+0000 mgr.y (mgr.24335) 78 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 61 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:31 vm08 bash[17774]: audit 2026-03-09T18:23:30.488445+0000 mon.a (mon.0) 701 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/1514471438"}]': finished 2026-03-09T18:23:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:31 vm08 bash[17774]: cluster 2026-03-09T18:23:30.489367+0000 mon.a (mon.0) 702 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T18:23:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:31 vm08 bash[17774]: audit 2026-03-09T18:23:30.712357+0000 mon.c (mon.1) 99 : audit [INF] from='client.? 192.168.123.100:0/2095089108' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]: dispatch 2026-03-09T18:23:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:31 vm08 bash[17774]: audit 2026-03-09T18:23:30.712741+0000 mon.a (mon.0) 703 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]: dispatch 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:32 vm00 bash[17468]: audit 2026-03-09T18:23:31.272934+0000 mgr.y (mgr.24335) 79 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:32 vm00 bash[17468]: audit 2026-03-09T18:23:31.490502+0000 mon.a (mon.0) 704 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]': finished 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:32 vm00 bash[17468]: cluster 2026-03-09T18:23:31.490627+0000 mon.a (mon.0) 705 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:32 vm00 bash[17468]: audit 2026-03-09T18:23:31.711821+0000 mon.c (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2653481141' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]: dispatch 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:32 vm00 bash[17468]: audit 2026-03-09T18:23:31.712229+0000 mon.a (mon.0) 706 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]: dispatch 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:32 vm00 bash[22468]: audit 2026-03-09T18:23:31.272934+0000 mgr.y (mgr.24335) 79 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:32 vm00 bash[22468]: audit 2026-03-09T18:23:31.490502+0000 mon.a (mon.0) 704 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]': finished 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:32 vm00 bash[22468]: cluster 2026-03-09T18:23:31.490627+0000 mon.a (mon.0) 705 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:32 vm00 bash[22468]: audit 2026-03-09T18:23:31.711821+0000 mon.c (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2653481141' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]: dispatch 2026-03-09T18:23:32.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:32 vm00 bash[22468]: audit 2026-03-09T18:23:31.712229+0000 mon.a (mon.0) 706 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]: dispatch 2026-03-09T18:23:32.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:32 vm08 bash[17774]: audit 2026-03-09T18:23:31.272934+0000 mgr.y (mgr.24335) 79 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:32.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:32 vm08 bash[17774]: audit 2026-03-09T18:23:31.490502+0000 mon.a (mon.0) 704 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/1514471438"}]': finished 2026-03-09T18:23:32.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:32 vm08 bash[17774]: cluster 2026-03-09T18:23:31.490627+0000 mon.a (mon.0) 705 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T18:23:32.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:32 vm08 bash[17774]: audit 2026-03-09T18:23:31.711821+0000 mon.c (mon.1) 100 : audit [INF] from='client.? 192.168.123.100:0/2653481141' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]: dispatch 2026-03-09T18:23:32.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:32 vm08 bash[17774]: audit 2026-03-09T18:23:31.712229+0000 mon.a (mon.0) 706 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]: dispatch 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[17468]: cluster 2026-03-09T18:23:32.439383+0000 mgr.y (mgr.24335) 80 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 61 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[17468]: audit 2026-03-09T18:23:32.563926+0000 mon.a (mon.0) 707 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]': finished 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[17468]: cluster 2026-03-09T18:23:32.566303+0000 mon.a (mon.0) 708 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[17468]: audit 2026-03-09T18:23:32.761966+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 192.168.123.100:0/3828957323' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3360653556"}]: dispatch 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:33 vm00 bash[22468]: cluster 2026-03-09T18:23:32.439383+0000 mgr.y (mgr.24335) 80 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 61 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:33 vm00 bash[22468]: audit 2026-03-09T18:23:32.563926+0000 mon.a (mon.0) 707 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]': finished 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:33 vm00 bash[22468]: cluster 2026-03-09T18:23:32.566303+0000 mon.a (mon.0) 708 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T18:23:33.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:33 vm00 bash[22468]: audit 2026-03-09T18:23:32.761966+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 
192.168.123.100:0/3828957323' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3360653556"}]: dispatch 2026-03-09T18:23:33.885 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[42815]: level=error ts=2026-03-09T18:23:33.505Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:23:33.885 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:33.507Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:23:33.885 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:33.507Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:23:33.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:33 vm08 bash[17774]: cluster 2026-03-09T18:23:32.439383+0000 mgr.y (mgr.24335) 80 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 61 
MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:33.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:33 vm08 bash[17774]: audit 2026-03-09T18:23:32.563926+0000 mon.a (mon.0) 707 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2433128758"}]': finished 2026-03-09T18:23:33.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:33 vm08 bash[17774]: cluster 2026-03-09T18:23:32.566303+0000 mon.a (mon.0) 708 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T18:23:33.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:33 vm08 bash[17774]: audit 2026-03-09T18:23:32.761966+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 192.168.123.100:0/3828957323' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3360653556"}]: dispatch 2026-03-09T18:23:34.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:34 vm00 bash[17468]: audit 2026-03-09T18:23:33.565715+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 192.168.123.100:0/3828957323' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3360653556"}]': finished 2026-03-09T18:23:34.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:34 vm00 bash[17468]: cluster 2026-03-09T18:23:33.568134+0000 mon.a (mon.0) 711 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T18:23:34.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:34 vm00 bash[17468]: audit 2026-03-09T18:23:33.761924+0000 mon.a (mon.0) 712 : audit [INF] from='client.? 
192.168.123.100:0/1830355733' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/4158221249"}]: dispatch 2026-03-09T18:23:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:34 vm00 bash[22468]: audit 2026-03-09T18:23:33.565715+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 192.168.123.100:0/3828957323' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3360653556"}]': finished 2026-03-09T18:23:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:34 vm00 bash[22468]: cluster 2026-03-09T18:23:33.568134+0000 mon.a (mon.0) 711 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T18:23:34.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:34 vm00 bash[22468]: audit 2026-03-09T18:23:33.761924+0000 mon.a (mon.0) 712 : audit [INF] from='client.? 192.168.123.100:0/1830355733' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/4158221249"}]: dispatch 2026-03-09T18:23:34.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:34 vm08 bash[17774]: audit 2026-03-09T18:23:33.565715+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 192.168.123.100:0/3828957323' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3360653556"}]': finished 2026-03-09T18:23:34.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:34 vm08 bash[17774]: cluster 2026-03-09T18:23:33.568134+0000 mon.a (mon.0) 711 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T18:23:34.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:34 vm08 bash[17774]: audit 2026-03-09T18:23:33.761924+0000 mon.a (mon.0) 712 : audit [INF] from='client.? 
192.168.123.100:0/1830355733' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/4158221249"}]: dispatch 2026-03-09T18:23:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:35 vm08 bash[17774]: cluster 2026-03-09T18:23:34.439834+0000 mgr.y (mgr.24335) 81 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 62 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:35 vm08 bash[17774]: audit 2026-03-09T18:23:34.571545+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.100:0/1830355733' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/4158221249"}]': finished 2026-03-09T18:23:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:35 vm08 bash[17774]: cluster 2026-03-09T18:23:34.571630+0000 mon.a (mon.0) 714 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T18:23:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:35 vm08 bash[17774]: audit 2026-03-09T18:23:34.765498+0000 mon.a (mon.0) 715 : audit [INF] from='client.? 192.168.123.100:0/2758994318' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4196289624"}]: dispatch 2026-03-09T18:23:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:35 vm00 bash[17468]: cluster 2026-03-09T18:23:34.439834+0000 mgr.y (mgr.24335) 81 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 62 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:35 vm00 bash[17468]: audit 2026-03-09T18:23:34.571545+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 
192.168.123.100:0/1830355733' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/4158221249"}]': finished 2026-03-09T18:23:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:35 vm00 bash[17468]: cluster 2026-03-09T18:23:34.571630+0000 mon.a (mon.0) 714 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T18:23:36.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:35 vm00 bash[17468]: audit 2026-03-09T18:23:34.765498+0000 mon.a (mon.0) 715 : audit [INF] from='client.? 192.168.123.100:0/2758994318' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4196289624"}]: dispatch 2026-03-09T18:23:36.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:35 vm00 bash[22468]: cluster 2026-03-09T18:23:34.439834+0000 mgr.y (mgr.24335) 81 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 62 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:36.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:35 vm00 bash[22468]: audit 2026-03-09T18:23:34.571545+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.100:0/1830355733' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/4158221249"}]': finished 2026-03-09T18:23:36.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:35 vm00 bash[22468]: cluster 2026-03-09T18:23:34.571630+0000 mon.a (mon.0) 714 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T18:23:36.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:35 vm00 bash[22468]: audit 2026-03-09T18:23:34.765498+0000 mon.a (mon.0) 715 : audit [INF] from='client.? 
192.168.123.100:0/2758994318' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4196289624"}]: dispatch 2026-03-09T18:23:36.135 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:35] "GET /metrics HTTP/1.1" 200 207644 "" "Prometheus/2.33.4" 2026-03-09T18:23:37.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:36 vm00 bash[22468]: audit 2026-03-09T18:23:35.760293+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.100:0/2758994318' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4196289624"}]': finished 2026-03-09T18:23:37.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:36 vm00 bash[22468]: cluster 2026-03-09T18:23:35.760414+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T18:23:37.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:36 vm00 bash[22468]: audit 2026-03-09T18:23:35.977217+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.100:0/771966498' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4196289624"}]: dispatch 2026-03-09T18:23:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:36 vm00 bash[17468]: audit 2026-03-09T18:23:35.760293+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 
192.168.123.100:0/2758994318' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4196289624"}]': finished 2026-03-09T18:23:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:36 vm00 bash[17468]: cluster 2026-03-09T18:23:35.760414+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T18:23:37.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:36 vm00 bash[17468]: audit 2026-03-09T18:23:35.977217+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.100:0/771966498' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4196289624"}]: dispatch 2026-03-09T18:23:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:36 vm08 bash[17774]: audit 2026-03-09T18:23:35.760293+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.100:0/2758994318' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4196289624"}]': finished 2026-03-09T18:23:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:36 vm08 bash[17774]: cluster 2026-03-09T18:23:35.760414+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T18:23:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:36 vm08 bash[17774]: audit 2026-03-09T18:23:35.977217+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 
192.168.123.100:0/771966498' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4196289624"}]: dispatch 2026-03-09T18:23:38.134 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:37 vm00 bash[17468]: cluster 2026-03-09T18:23:36.440177+0000 mgr.y (mgr.24335) 82 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 46 B/s, 0 objects/s recovering 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:37 vm00 bash[17468]: audit 2026-03-09T18:23:36.774759+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.100:0/771966498' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4196289624"}]': finished 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:37 vm00 bash[17468]: cluster 2026-03-09T18:23:36.774782+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:37 vm00 bash[17468]: audit 2026-03-09T18:23:36.972207+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]: dispatch 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:37 vm00 bash[17468]: audit 2026-03-09T18:23:36.973310+0000 mon.b (mon.2) 38 : audit [INF] from='client.? 
192.168.123.100:0/2990922165' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]: dispatch 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:37 vm00 bash[22468]: cluster 2026-03-09T18:23:36.440177+0000 mgr.y (mgr.24335) 82 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 46 B/s, 0 objects/s recovering 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:37 vm00 bash[22468]: audit 2026-03-09T18:23:36.774759+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.100:0/771966498' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4196289624"}]': finished 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:37 vm00 bash[22468]: cluster 2026-03-09T18:23:36.774782+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:37 vm00 bash[22468]: audit 2026-03-09T18:23:36.972207+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]: dispatch 2026-03-09T18:23:38.135 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:37 vm00 bash[22468]: audit 2026-03-09T18:23:36.973310+0000 mon.b (mon.2) 38 : audit [INF] from='client.? 
192.168.123.100:0/2990922165' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]: dispatch 2026-03-09T18:23:38.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:23:37 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:37] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:23:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:37 vm08 bash[17774]: cluster 2026-03-09T18:23:36.440177+0000 mgr.y (mgr.24335) 82 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 46 B/s, 0 objects/s recovering 2026-03-09T18:23:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:37 vm08 bash[17774]: audit 2026-03-09T18:23:36.774759+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.100:0/771966498' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4196289624"}]': finished 2026-03-09T18:23:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:37 vm08 bash[17774]: cluster 2026-03-09T18:23:36.774782+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T18:23:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:37 vm08 bash[17774]: audit 2026-03-09T18:23:36.972207+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]: dispatch 2026-03-09T18:23:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:37 vm08 bash[17774]: audit 2026-03-09T18:23:36.973310+0000 mon.b (mon.2) 38 : audit [INF] from='client.? 
192.168.123.100:0/2990922165' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]: dispatch 2026-03-09T18:23:39.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:38 vm08 bash[17774]: audit 2026-03-09T18:23:37.911990+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]': finished 2026-03-09T18:23:39.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:38 vm08 bash[17774]: cluster 2026-03-09T18:23:37.912182+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T18:23:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:38 vm00 bash[17468]: audit 2026-03-09T18:23:37.911990+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]': finished 2026-03-09T18:23:39.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:38 vm00 bash[17468]: cluster 2026-03-09T18:23:37.912182+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T18:23:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:38 vm00 bash[22468]: audit 2026-03-09T18:23:37.911990+0000 mon.a (mon.0) 722 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3505528954"}]': finished 2026-03-09T18:23:39.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:38 vm00 bash[22468]: cluster 2026-03-09T18:23:37.912182+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T18:23:40.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:39 vm08 bash[17774]: cluster 2026-03-09T18:23:38.440659+0000 mgr.y (mgr.24335) 83 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 46 B/s, 0 objects/s recovering 2026-03-09T18:23:40.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:39 vm00 bash[17468]: cluster 2026-03-09T18:23:38.440659+0000 mgr.y (mgr.24335) 83 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 46 B/s, 0 objects/s recovering 2026-03-09T18:23:40.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:39 vm00 bash[22468]: cluster 2026-03-09T18:23:38.440659+0000 mgr.y (mgr.24335) 83 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 46 B/s, 0 objects/s recovering 2026-03-09T18:23:42.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:41 vm08 bash[17774]: cluster 2026-03-09T18:23:40.441428+0000 mgr.y (mgr.24335) 84 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 872 B/s rd, 0 op/s 2026-03-09T18:23:42.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:41 vm00 bash[17468]: cluster 2026-03-09T18:23:40.441428+0000 mgr.y (mgr.24335) 84 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 872 B/s rd, 0 op/s 2026-03-09T18:23:42.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:41 vm00 bash[22468]: cluster 2026-03-09T18:23:40.441428+0000 mgr.y (mgr.24335) 84 : cluster [DBG] pgmap v71: 161 pgs: 161 
active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 872 B/s rd, 0 op/s 2026-03-09T18:23:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:42 vm08 bash[17774]: audit 2026-03-09T18:23:41.280253+0000 mgr.y (mgr.24335) 85 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:43.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:42 vm00 bash[17468]: audit 2026-03-09T18:23:41.280253+0000 mgr.y (mgr.24335) 85 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:43.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:42 vm00 bash[22468]: audit 2026-03-09T18:23:41.280253+0000 mgr.y (mgr.24335) 85 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:43.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:43 vm00 bash[42815]: level=error ts=2026-03-09T18:23:43.505Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:23:43.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:43.507Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:23:43.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:43.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:23:44.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:43 vm08 bash[17774]: cluster 2026-03-09T18:23:42.441789+0000 mgr.y (mgr.24335) 86 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 770 B/s rd, 0 op/s 2026-03-09T18:23:44.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:43 vm00 bash[22468]: cluster 2026-03-09T18:23:42.441789+0000 mgr.y (mgr.24335) 86 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 770 B/s rd, 0 op/s 2026-03-09T18:23:44.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:43 vm00 bash[17468]: cluster 2026-03-09T18:23:42.441789+0000 mgr.y (mgr.24335) 86 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 770 B/s rd, 0 op/s 2026-03-09T18:23:46.134 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:45 vm00 bash[22468]: cluster 2026-03-09T18:23:44.442161+0000 mgr.y (mgr.24335) 87 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:46.134 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:45] "GET /metrics HTTP/1.1" 200 207669 "" "Prometheus/2.33.4" 2026-03-09T18:23:46.134 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:45 vm00 bash[17468]: cluster 2026-03-09T18:23:44.442161+0000 mgr.y (mgr.24335) 87 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:46.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:45 vm08 bash[17774]: cluster 2026-03-09T18:23:44.442161+0000 mgr.y (mgr.24335) 87 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:47.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:47 vm00 bash[22468]: cluster 2026-03-09T18:23:46.442632+0000 mgr.y (mgr.24335) 88 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T18:23:47.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:47 vm00 bash[17468]: cluster 2026-03-09T18:23:46.442632+0000 mgr.y (mgr.24335) 88 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T18:23:47.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:47 vm08 bash[17774]: cluster 2026-03-09T18:23:46.442632+0000 mgr.y (mgr.24335) 88 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T18:23:48.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:23:47 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:23:49.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:49 vm00 bash[22468]: cluster 2026-03-09T18:23:48.442987+0000 mgr.y (mgr.24335) 89 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 972 B/s rd, 0 op/s 2026-03-09T18:23:49.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:49 vm00 bash[17468]: cluster 
2026-03-09T18:23:48.442987+0000 mgr.y (mgr.24335) 89 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 972 B/s rd, 0 op/s 2026-03-09T18:23:49.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:49 vm08 bash[17774]: cluster 2026-03-09T18:23:48.442987+0000 mgr.y (mgr.24335) 89 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 972 B/s rd, 0 op/s 2026-03-09T18:23:51.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:51 vm00 bash[22468]: cluster 2026-03-09T18:23:50.443557+0000 mgr.y (mgr.24335) 90 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:51.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:51 vm00 bash[17468]: cluster 2026-03-09T18:23:50.443557+0000 mgr.y (mgr.24335) 90 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:51.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:51 vm08 bash[17774]: cluster 2026-03-09T18:23:50.443557+0000 mgr.y (mgr.24335) 90 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:52.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:52 vm00 bash[22468]: audit 2026-03-09T18:23:51.288377+0000 mgr.y (mgr.24335) 91 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:52.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:52 vm00 bash[17468]: audit 2026-03-09T18:23:51.288377+0000 mgr.y (mgr.24335) 91 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:52.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:52 vm08 bash[17774]: 
audit 2026-03-09T18:23:51.288377+0000 mgr.y (mgr.24335) 91 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:23:53.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:53 vm00 bash[22468]: cluster 2026-03-09T18:23:52.443908+0000 mgr.y (mgr.24335) 92 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:53.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:53 vm00 bash[17468]: cluster 2026-03-09T18:23:52.443908+0000 mgr.y (mgr.24335) 92 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:53.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:53 vm00 bash[42815]: level=error ts=2026-03-09T18:23:53.506Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:23:53.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:23:53.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:23:53.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:23:53 vm00 
bash[42815]: level=warn ts=2026-03-09T18:23:53.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:23:53.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:53 vm08 bash[17774]: cluster 2026-03-09T18:23:52.443908+0000 mgr.y (mgr.24335) 92 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:55.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:55 vm00 bash[17468]: cluster 2026-03-09T18:23:54.444415+0000 mgr.y (mgr.24335) 93 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:55.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:55 vm00 bash[22468]: cluster 2026-03-09T18:23:54.444415+0000 mgr.y (mgr.24335) 93 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:55.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:23:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:55] "GET /metrics HTTP/1.1" 200 207669 "" "Prometheus/2.33.4" 2026-03-09T18:23:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:55 vm08 bash[17774]: cluster 2026-03-09T18:23:54.444415+0000 mgr.y (mgr.24335) 93 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:23:57.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:57 vm00 bash[17468]: cluster 2026-03-09T18:23:56.444895+0000 mgr.y (mgr.24335) 94 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:23:57.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:57 vm00 bash[22468]: cluster 2026-03-09T18:23:56.444895+0000 mgr.y (mgr.24335) 94 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:57.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:57 vm08 bash[17774]: cluster 2026-03-09T18:23:56.444895+0000 mgr.y (mgr.24335) 94 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:58.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:23:57 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:23:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:23:59.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:23:59 vm00 bash[17468]: cluster 2026-03-09T18:23:58.445337+0000 mgr.y (mgr.24335) 95 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:59.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:23:59 vm00 bash[22468]: cluster 2026-03-09T18:23:58.445337+0000 mgr.y (mgr.24335) 95 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:23:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:23:59 vm08 bash[17774]: cluster 2026-03-09T18:23:58.445337+0000 mgr.y (mgr.24335) 95 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:01.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:01 vm00 bash[17468]: cluster 2026-03-09T18:24:00.445943+0000 mgr.y (mgr.24335) 96 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:01.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:01 vm00 bash[22468]: 
cluster 2026-03-09T18:24:00.445943+0000 mgr.y (mgr.24335) 96 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:01.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:01 vm08 bash[17774]: cluster 2026-03-09T18:24:00.445943+0000 mgr.y (mgr.24335) 96 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:02.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:02 vm00 bash[17468]: audit 2026-03-09T18:24:01.299022+0000 mgr.y (mgr.24335) 97 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:24:02.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:02 vm00 bash[22468]: audit 2026-03-09T18:24:01.299022+0000 mgr.y (mgr.24335) 97 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:24:02.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:02 vm08 bash[17774]: audit 2026-03-09T18:24:01.299022+0000 mgr.y (mgr.24335) 97 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:24:03.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:03 vm00 bash[17468]: cluster 2026-03-09T18:24:02.446241+0000 mgr.y (mgr.24335) 98 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:03.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:03 vm00 bash[22468]: cluster 2026-03-09T18:24:02.446241+0000 mgr.y (mgr.24335) 98 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:03.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 
18:24:03 vm00 bash[42815]: level=error ts=2026-03-09T18:24:03.506Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:24:03.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:03.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:24:03.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:03.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:24:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:03 vm08 bash[17774]: cluster 2026-03-09T18:24:02.446241+0000 mgr.y (mgr.24335) 98 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:05.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:05 vm00 bash[17468]: cluster 2026-03-09T18:24:04.446665+0000 mgr.y (mgr.24335) 99 : cluster [DBG] pgmap v83: 161 pgs: 161 
active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:05.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:24:05 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:05] "GET /metrics HTTP/1.1" 200 207621 "" "Prometheus/2.33.4" 2026-03-09T18:24:05.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:05 vm00 bash[22468]: cluster 2026-03-09T18:24:04.446665+0000 mgr.y (mgr.24335) 99 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:05 vm08 bash[17774]: cluster 2026-03-09T18:24:04.446665+0000 mgr.y (mgr.24335) 99 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:07.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:07 vm00 bash[17468]: cluster 2026-03-09T18:24:06.447151+0000 mgr.y (mgr.24335) 100 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:07.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:07 vm00 bash[22468]: cluster 2026-03-09T18:24:06.447151+0000 mgr.y (mgr.24335) 100 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:07 vm08 bash[17774]: cluster 2026-03-09T18:24:06.447151+0000 mgr.y (mgr.24335) 100 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:08.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:24:07 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:07] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:24:09.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:09 vm00 
bash[17468]: cluster 2026-03-09T18:24:08.447452+0000 mgr.y (mgr.24335) 101 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:09.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:09 vm00 bash[22468]: cluster 2026-03-09T18:24:08.447452+0000 mgr.y (mgr.24335) 101 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:09.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:09 vm08 bash[17774]: cluster 2026-03-09T18:24:08.447452+0000 mgr.y (mgr.24335) 101 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:11.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:11 vm00 bash[17468]: cluster 2026-03-09T18:24:10.447944+0000 mgr.y (mgr.24335) 102 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:11.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:11 vm00 bash[22468]: cluster 2026-03-09T18:24:10.447944+0000 mgr.y (mgr.24335) 102 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:11 vm08 bash[17774]: cluster 2026-03-09T18:24:10.447944+0000 mgr.y (mgr.24335) 102 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:24:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:12 vm00 bash[17468]: audit 2026-03-09T18:24:11.307276+0000 mgr.y (mgr.24335) 103 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:24:12.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:12 
vm00 bash[22468]: audit 2026-03-09T18:24:11.307276+0000 mgr.y (mgr.24335) 103 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:12.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:12 vm08 bash[17774]: audit 2026-03-09T18:24:11.307276+0000 mgr.y (mgr.24335) 103 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:13.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:13 vm00 bash[17468]: cluster 2026-03-09T18:24:12.448348+0000 mgr.y (mgr.24335) 104 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:13.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:13 vm00 bash[22468]: cluster 2026-03-09T18:24:12.448348+0000 mgr.y (mgr.24335) 104 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:13.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:13 vm00 bash[42815]: level=error ts=2026-03-09T18:24:13.507Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:13.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:13.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:13.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:13.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:13.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:13 vm08 bash[17774]: cluster 2026-03-09T18:24:12.448348+0000 mgr.y (mgr.24335) 104 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:15.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:15 vm00 bash[17468]: cluster 2026-03-09T18:24:14.448889+0000 mgr.y (mgr.24335) 105 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:15.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:15 vm00 bash[22468]: cluster 2026-03-09T18:24:14.448889+0000 mgr.y (mgr.24335) 105 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:15.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:24:15 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:15] "GET /metrics HTTP/1.1" 200 207593 "" "Prometheus/2.33.4"
2026-03-09T18:24:15.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:15 vm08 bash[17774]: cluster 2026-03-09T18:24:14.448889+0000 mgr.y (mgr.24335) 105 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:17.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:17 vm00 bash[17468]: cluster 2026-03-09T18:24:16.449420+0000 mgr.y (mgr.24335) 106 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:17.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:17 vm00 bash[22468]: cluster 2026-03-09T18:24:16.449420+0000 mgr.y (mgr.24335) 106 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:17.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:17 vm08 bash[17774]: cluster 2026-03-09T18:24:16.449420+0000 mgr.y (mgr.24335) 106 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:18.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:24:17 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:24:19.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:19 vm00 bash[17468]: cluster 2026-03-09T18:24:18.449730+0000 mgr.y (mgr.24335) 107 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:19.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:19 vm00 bash[22468]: cluster 2026-03-09T18:24:18.449730+0000 mgr.y (mgr.24335) 107 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:19.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:19 vm08 bash[17774]: cluster 2026-03-09T18:24:18.449730+0000 mgr.y (mgr.24335) 107 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:21.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:21 vm00 bash[17468]: cluster 2026-03-09T18:24:20.450255+0000 mgr.y (mgr.24335) 108 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:21.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:21 vm00 bash[22468]: cluster 2026-03-09T18:24:20.450255+0000 mgr.y (mgr.24335) 108 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:21.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:21 vm08 bash[17774]: cluster 2026-03-09T18:24:20.450255+0000 mgr.y (mgr.24335) 108 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:22.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:22 vm08 bash[17774]: audit 2026-03-09T18:24:21.317870+0000 mgr.y (mgr.24335) 109 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:22.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:22 vm00 bash[17468]: audit 2026-03-09T18:24:21.317870+0000 mgr.y (mgr.24335) 109 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:22.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:22 vm00 bash[22468]: audit 2026-03-09T18:24:21.317870+0000 mgr.y (mgr.24335) 109 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:23.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:23 vm00 bash[17468]: cluster 2026-03-09T18:24:22.450636+0000 mgr.y (mgr.24335) 110 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:23.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:23 vm00 bash[22468]: cluster 2026-03-09T18:24:22.450636+0000 mgr.y (mgr.24335) 110 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:23.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:23 vm00 bash[42815]: level=error ts=2026-03-09T18:24:23.508Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:23.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:23.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:23.884 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:23.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:23 vm08 bash[17774]: cluster 2026-03-09T18:24:22.450636+0000 mgr.y (mgr.24335) 110 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:24 vm08 bash[17774]: audit 2026-03-09T18:24:24.412456+0000 mon.c (mon.1) 101 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:24:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:24 vm08 bash[17774]: audit 2026-03-09T18:24:24.413459+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:24:24.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:24 vm08 bash[17774]: audit 2026-03-09T18:24:24.414330+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:24:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:24 vm00 bash[17468]: audit 2026-03-09T18:24:24.412456+0000 mon.c (mon.1) 101 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:24:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:24 vm00 bash[17468]: audit 2026-03-09T18:24:24.413459+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:24:24.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:24 vm00 bash[17468]: audit 2026-03-09T18:24:24.414330+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:24:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:24 vm00 bash[22468]: audit 2026-03-09T18:24:24.412456+0000 mon.c (mon.1) 101 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:24:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:24 vm00 bash[22468]: audit 2026-03-09T18:24:24.413459+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:24:24.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:24 vm00 bash[22468]: audit 2026-03-09T18:24:24.414330+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:24:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:25 vm00 bash[22468]: cluster 2026-03-09T18:24:24.451170+0000 mgr.y (mgr.24335) 111 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:25.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:25 vm00 bash[22468]: audit 2026-03-09T18:24:24.586049+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:24:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:25 vm00 bash[17468]: cluster 2026-03-09T18:24:24.451170+0000 mgr.y (mgr.24335) 111 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:25.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:25 vm00 bash[17468]: audit 2026-03-09T18:24:24.586049+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:24:25.884 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:24:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:25] "GET /metrics HTTP/1.1" 200 207593 "" "Prometheus/2.33.4"
2026-03-09T18:24:25.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:25 vm08 bash[17774]: cluster 2026-03-09T18:24:24.451170+0000 mgr.y (mgr.24335) 111 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:25.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:25 vm08 bash[17774]: audit 2026-03-09T18:24:24.586049+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:24:27.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:27 vm00 bash[22468]: cluster 2026-03-09T18:24:26.451583+0000 mgr.y (mgr.24335) 112 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:27.384 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:27 vm00 bash[17468]: cluster 2026-03-09T18:24:26.451583+0000 mgr.y (mgr.24335) 112 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:27.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:27 vm08 bash[17774]: cluster 2026-03-09T18:24:26.451583+0000 mgr.y (mgr.24335) 112 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:28.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:24:27 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:27] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:29 vm00 bash[22468]: cluster 2026-03-09T18:24:28.451937+0000 mgr.y (mgr.24335) 113 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:29 vm00 bash[22468]: audit 2026-03-09T18:24:28.520964+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:29 vm00 bash[22468]: audit 2026-03-09T18:24:28.521455+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:29 vm00 bash[22468]: audit 2026-03-09T18:24:28.534502+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:29 vm00 bash[22468]: audit 2026-03-09T18:24:28.534815+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:29 vm00 bash[17468]: cluster 2026-03-09T18:24:28.451937+0000 mgr.y (mgr.24335) 113 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:29 vm00 bash[17468]: audit 2026-03-09T18:24:28.520964+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:29 vm00 bash[17468]: audit 2026-03-09T18:24:28.521455+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:29 vm00 bash[17468]: audit 2026-03-09T18:24:28.534502+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:24:29.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:29 vm00 bash[17468]: audit 2026-03-09T18:24:28.534815+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:24:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:29 vm08 bash[17774]: cluster 2026-03-09T18:24:28.451937+0000 mgr.y (mgr.24335) 113 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:29 vm08 bash[17774]: audit 2026-03-09T18:24:28.520964+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:24:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:29 vm08 bash[17774]: audit 2026-03-09T18:24:28.521455+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:24:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:29 vm08 bash[17774]: audit 2026-03-09T18:24:28.534502+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:24:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:29 vm08 bash[17774]: audit 2026-03-09T18:24:28.534815+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:24:31.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:31 vm00 bash[22468]: cluster 2026-03-09T18:24:30.452649+0000 mgr.y (mgr.24335) 114 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:31.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:31 vm00 bash[17468]: cluster 2026-03-09T18:24:30.452649+0000 mgr.y (mgr.24335) 114 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:31 vm08 bash[17774]: cluster 2026-03-09T18:24:30.452649+0000 mgr.y (mgr.24335) 114 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:32.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:32 vm00 bash[22468]: audit 2026-03-09T18:24:31.324096+0000 mgr.y (mgr.24335) 115 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:32.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:32 vm00 bash[17468]: audit 2026-03-09T18:24:31.324096+0000 mgr.y (mgr.24335) 115 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:32.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:32 vm08 bash[17774]: audit 2026-03-09T18:24:31.324096+0000 mgr.y (mgr.24335) 115 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:33.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:33 vm00 bash[22468]: cluster 2026-03-09T18:24:32.453010+0000 mgr.y (mgr.24335) 116 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:33.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:33 vm00 bash[17468]: cluster 2026-03-09T18:24:32.453010+0000 mgr.y (mgr.24335) 116 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:33.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:33 vm00 bash[42815]: level=error ts=2026-03-09T18:24:33.508Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:33.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:33.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:33.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:33.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:33.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:33 vm08 bash[17774]: cluster 2026-03-09T18:24:32.453010+0000 mgr.y (mgr.24335) 116 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:35.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:35 vm00 bash[22468]: cluster 2026-03-09T18:24:34.453461+0000 mgr.y (mgr.24335) 117 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:35.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:35 vm00 bash[17468]: cluster 2026-03-09T18:24:34.453461+0000 mgr.y (mgr.24335) 117 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:35.883 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:24:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:35] "GET /metrics HTTP/1.1" 200 207631 "" "Prometheus/2.33.4"
2026-03-09T18:24:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:35 vm08 bash[17774]: cluster 2026-03-09T18:24:34.453461+0000 mgr.y (mgr.24335) 117 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:37.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:37 vm00 bash[22468]: cluster 2026-03-09T18:24:36.453775+0000 mgr.y (mgr.24335) 118 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:37.383 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:37 vm00 bash[17468]: cluster 2026-03-09T18:24:36.453775+0000 mgr.y (mgr.24335) 118 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:37.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:37 vm08 bash[17774]: cluster 2026-03-09T18:24:36.453775+0000 mgr.y (mgr.24335) 118 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:38.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:24:37 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:37] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:24:39.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:39 vm00 bash[22468]: cluster 2026-03-09T18:24:38.454066+0000 mgr.y (mgr.24335) 119 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:39.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:39 vm00 bash[17468]: cluster 2026-03-09T18:24:38.454066+0000 mgr.y (mgr.24335) 119 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:39.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:39 vm08 bash[17774]: cluster 2026-03-09T18:24:38.454066+0000 mgr.y (mgr.24335) 119 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:41.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:41 vm00 bash[17468]: cluster 2026-03-09T18:24:40.454741+0000 mgr.y (mgr.24335) 120 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:41.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:41 vm00 bash[22468]: cluster 2026-03-09T18:24:40.454741+0000 mgr.y (mgr.24335) 120 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:41.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:41 vm08 bash[17774]: cluster 2026-03-09T18:24:40.454741+0000 mgr.y (mgr.24335) 120 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:42.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:42 vm00 bash[17468]: audit 2026-03-09T18:24:41.334819+0000 mgr.y (mgr.24335) 121 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:42.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:42 vm00 bash[22468]: audit 2026-03-09T18:24:41.334819+0000 mgr.y (mgr.24335) 121 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:42 vm08 bash[17774]: audit 2026-03-09T18:24:41.334819+0000 mgr.y (mgr.24335) 121 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:43.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:43 vm00 bash[17468]: cluster 2026-03-09T18:24:42.455119+0000 mgr.y (mgr.24335) 122 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:43.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:43 vm00 bash[22468]: cluster 2026-03-09T18:24:42.455119+0000 mgr.y (mgr.24335) 122 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:43.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:43 vm00 bash[42815]: level=error ts=2026-03-09T18:24:43.509Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:43.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:43.511Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:43.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:43.511Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:43 vm08 bash[17774]: cluster 2026-03-09T18:24:42.455119+0000 mgr.y (mgr.24335) 122 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:45.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:45 vm00 bash[22468]: cluster 2026-03-09T18:24:44.455743+0000 mgr.y (mgr.24335) 123 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:45.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:45 vm00 bash[17468]: cluster 2026-03-09T18:24:44.455743+0000 mgr.y (mgr.24335) 123 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:45.883 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:24:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:45] "GET /metrics HTTP/1.1" 200 207666 "" "Prometheus/2.33.4"
2026-03-09T18:24:45.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:45 vm08 bash[17774]: cluster 2026-03-09T18:24:44.455743+0000 mgr.y (mgr.24335) 123 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:47.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:47 vm08 bash[17774]: cluster 2026-03-09T18:24:46.456053+0000 mgr.y (mgr.24335) 124 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:47.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:47 vm00 bash[22468]: cluster 2026-03-09T18:24:46.456053+0000 mgr.y (mgr.24335) 124 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:47.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:47 vm00 bash[17468]: cluster 2026-03-09T18:24:46.456053+0000 mgr.y (mgr.24335) 124 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:48.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:24:47 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:24:49.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:49 vm00 bash[22468]: cluster 2026-03-09T18:24:48.456410+0000 mgr.y (mgr.24335) 125 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:49.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:49 vm00 bash[17468]: cluster 2026-03-09T18:24:48.456410+0000 mgr.y (mgr.24335) 125 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:49.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:49 vm08 bash[17774]: cluster 2026-03-09T18:24:48.456410+0000 mgr.y (mgr.24335) 125 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:51.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:51 vm00 bash[22468]: cluster 2026-03-09T18:24:50.457133+0000 mgr.y (mgr.24335) 126 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:51.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:51 vm00 bash[17468]: cluster 2026-03-09T18:24:50.457133+0000 mgr.y (mgr.24335) 126 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:51.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:51 vm08 bash[17774]: cluster 2026-03-09T18:24:50.457133+0000 mgr.y (mgr.24335) 126 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:52.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:52 vm00 bash[22468]: audit 2026-03-09T18:24:51.341562+0000 mgr.y (mgr.24335) 127 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:52.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:52 vm00 bash[17468]: audit 2026-03-09T18:24:51.341562+0000 mgr.y (mgr.24335) 127 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:52.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:52 vm08 bash[17774]: audit 2026-03-09T18:24:51.341562+0000 mgr.y (mgr.24335) 127 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:24:53.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:53 vm00 bash[22468]: cluster 2026-03-09T18:24:52.457419+0000 mgr.y (mgr.24335) 128 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:53 vm00 bash[17468]: cluster 2026-03-09T18:24:52.457419+0000 mgr.y (mgr.24335) 128 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:53.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:53 vm00 bash[42815]: level=error ts=2026-03-09T18:24:53.509Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:53.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:53.511Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:24:53.883 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:24:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:24:53.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:24:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:53 vm08 bash[17774]: cluster 2026-03-09T18:24:52.457419+0000 mgr.y (mgr.24335) 128 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:24:55.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:55 vm00 bash[22468]: cluster 2026-03-09T18:24:54.457921+0000 mgr.y (mgr.24335) 129 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:55.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:55 vm00 bash[17468]: cluster 2026-03-09T18:24:54.457921+0000 mgr.y (mgr.24335) 129 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:55.882 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:24:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:55] "GET /metrics HTTP/1.1" 200 207666 "" "Prometheus/2.33.4"
2026-03-09T18:24:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:55 vm08 bash[17774]: cluster 2026-03-09T18:24:54.457921+0000 mgr.y (mgr.24335) 129 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:24:57.474
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:57 vm08 bash[17774]: cluster 2026-03-09T18:24:56.458214+0000 mgr.y (mgr.24335) 130 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:57.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:57 vm00 bash[22468]: cluster 2026-03-09T18:24:56.458214+0000 mgr.y (mgr.24335) 130 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:57.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:57 vm00 bash[17468]: cluster 2026-03-09T18:24:56.458214+0000 mgr.y (mgr.24335) 130 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:58.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:24:57 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:24:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:24:59.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:24:59 vm00 bash[22468]: cluster 2026-03-09T18:24:58.458480+0000 mgr.y (mgr.24335) 131 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:59.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:24:59 vm00 bash[17468]: cluster 2026-03-09T18:24:58.458480+0000 mgr.y (mgr.24335) 131 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:24:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:24:59 vm08 bash[17774]: cluster 2026-03-09T18:24:58.458480+0000 mgr.y (mgr.24335) 131 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:01 vm08 bash[17774]: cluster 
2026-03-09T18:25:00.458997+0000 mgr.y (mgr.24335) 132 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:02.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:01 vm00 bash[22468]: cluster 2026-03-09T18:25:00.458997+0000 mgr.y (mgr.24335) 132 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:02.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:01 vm00 bash[17468]: cluster 2026-03-09T18:25:00.458997+0000 mgr.y (mgr.24335) 132 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:02 vm08 bash[17774]: audit 2026-03-09T18:25:01.345225+0000 mgr.y (mgr.24335) 133 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:03.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:02 vm00 bash[22468]: audit 2026-03-09T18:25:01.345225+0000 mgr.y (mgr.24335) 133 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:03.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:02 vm00 bash[17468]: audit 2026-03-09T18:25:01.345225+0000 mgr.y (mgr.24335) 133 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:03.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:03 vm00 bash[22468]: cluster 2026-03-09T18:25:02.459330+0000 mgr.y (mgr.24335) 134 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:03.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:25:03 vm00 bash[17468]: cluster 2026-03-09T18:25:02.459330+0000 mgr.y (mgr.24335) 134 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:03.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:03 vm00 bash[42815]: level=error ts=2026-03-09T18:25:03.509Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:03.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:03.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:03.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:03.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:25:03.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:03 vm08 bash[17774]: cluster 2026-03-09T18:25:02.459330+0000 mgr.y (mgr.24335) 134 : cluster [DBG] pgmap v112: 
161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:05 vm08 bash[17774]: cluster 2026-03-09T18:25:04.459900+0000 mgr.y (mgr.24335) 135 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:06.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:05 vm00 bash[22468]: cluster 2026-03-09T18:25:04.459900+0000 mgr.y (mgr.24335) 135 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:05 vm00 bash[17468]: cluster 2026-03-09T18:25:04.459900+0000 mgr.y (mgr.24335) 135 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:06.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:25:05 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:05] "GET /metrics HTTP/1.1" 200 207665 "" "Prometheus/2.33.4" 2026-03-09T18:25:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:07 vm08 bash[17774]: cluster 2026-03-09T18:25:06.460196+0000 mgr.y (mgr.24335) 136 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:07.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:07 vm00 bash[22468]: cluster 2026-03-09T18:25:06.460196+0000 mgr.y (mgr.24335) 136 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:07.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:07 vm00 bash[17468]: cluster 2026-03-09T18:25:06.460196+0000 mgr.y (mgr.24335) 136 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:25:08.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:25:08 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:07] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:25:09.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:09 vm00 bash[22468]: cluster 2026-03-09T18:25:08.460654+0000 mgr.y (mgr.24335) 137 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:09 vm00 bash[17468]: cluster 2026-03-09T18:25:08.460654+0000 mgr.y (mgr.24335) 137 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:09.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:09 vm08 bash[17774]: cluster 2026-03-09T18:25:08.460654+0000 mgr.y (mgr.24335) 137 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:11.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:11 vm08 bash[17774]: cluster 2026-03-09T18:25:10.461353+0000 mgr.y (mgr.24335) 138 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:12.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:11 vm00 bash[22468]: cluster 2026-03-09T18:25:10.461353+0000 mgr.y (mgr.24335) 138 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:12.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:11 vm00 bash[17468]: cluster 2026-03-09T18:25:10.461353+0000 mgr.y (mgr.24335) 138 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:12.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:12 vm08 bash[17774]: audit 2026-03-09T18:25:11.351968+0000 mgr.y (mgr.24335) 139 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:13.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:12 vm00 bash[22468]: audit 2026-03-09T18:25:11.351968+0000 mgr.y (mgr.24335) 139 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:13.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:12 vm00 bash[17468]: audit 2026-03-09T18:25:11.351968+0000 mgr.y (mgr.24335) 139 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:13.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:13 vm00 bash[42815]: level=error ts=2026-03-09T18:25:13.510Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:13.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:13.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 
2026-03-09T18:25:13.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:13.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:25:14.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:13 vm00 bash[22468]: cluster 2026-03-09T18:25:12.461657+0000 mgr.y (mgr.24335) 140 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:14.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:13 vm00 bash[17468]: cluster 2026-03-09T18:25:12.461657+0000 mgr.y (mgr.24335) 140 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:13 vm08 bash[17774]: cluster 2026-03-09T18:25:12.461657+0000 mgr.y (mgr.24335) 140 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:16.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:15 vm00 bash[22468]: cluster 2026-03-09T18:25:14.462218+0000 mgr.y (mgr.24335) 141 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:16.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:15 vm00 bash[17468]: cluster 2026-03-09T18:25:14.462218+0000 mgr.y (mgr.24335) 141 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:16.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:25:15 vm00 bash[17744]: 
::ffff:192.168.123.108 - - [09/Mar/2026:18:25:15] "GET /metrics HTTP/1.1" 200 207673 "" "Prometheus/2.33.4" 2026-03-09T18:25:16.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:15 vm08 bash[17774]: cluster 2026-03-09T18:25:14.462218+0000 mgr.y (mgr.24335) 141 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:17.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:17 vm00 bash[22468]: cluster 2026-03-09T18:25:16.462558+0000 mgr.y (mgr.24335) 142 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:17.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:17 vm00 bash[17468]: cluster 2026-03-09T18:25:16.462558+0000 mgr.y (mgr.24335) 142 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:17 vm08 bash[17774]: cluster 2026-03-09T18:25:16.462558+0000 mgr.y (mgr.24335) 142 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:18.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:25:17 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:25:19.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:19 vm00 bash[22468]: cluster 2026-03-09T18:25:18.462857+0000 mgr.y (mgr.24335) 143 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:19.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:19 vm00 bash[17468]: cluster 2026-03-09T18:25:18.462857+0000 mgr.y (mgr.24335) 143 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:19.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:19 vm08 bash[17774]: cluster 2026-03-09T18:25:18.462857+0000 mgr.y (mgr.24335) 143 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:21.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:21 vm00 bash[22468]: cluster 2026-03-09T18:25:20.463516+0000 mgr.y (mgr.24335) 144 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:21.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:21 vm00 bash[17468]: cluster 2026-03-09T18:25:20.463516+0000 mgr.y (mgr.24335) 144 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:21 vm08 bash[17774]: cluster 2026-03-09T18:25:20.463516+0000 mgr.y (mgr.24335) 144 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:22 vm00 bash[22468]: audit 2026-03-09T18:25:21.361609+0000 mgr.y (mgr.24335) 145 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:22 vm00 bash[17468]: audit 2026-03-09T18:25:21.361609+0000 mgr.y (mgr.24335) 145 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:22 vm08 bash[17774]: audit 2026-03-09T18:25:21.361609+0000 mgr.y (mgr.24335) 145 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:23.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:23 vm00 bash[22468]: cluster 2026-03-09T18:25:22.463828+0000 mgr.y (mgr.24335) 146 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:23 vm00 bash[17468]: cluster 2026-03-09T18:25:22.463828+0000 mgr.y (mgr.24335) 146 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:23.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:23 vm00 bash[42815]: level=error ts=2026-03-09T18:25:23.511Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:23.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:23.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:25:23.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:23.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard 
integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:23 vm08 bash[17774]: cluster 2026-03-09T18:25:22.463828+0000 mgr.y (mgr.24335) 146 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:24 vm00 bash[22468]: audit 2026-03-09T18:25:24.589750+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:25:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:24 vm00 bash[22468]: audit 2026-03-09T18:25:24.591273+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:25:24.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:24 vm00 bash[22468]: audit 2026-03-09T18:25:24.592105+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:25:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:24 vm00 bash[17468]: audit 2026-03-09T18:25:24.589750+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:25:24.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:24 vm00 bash[17468]: audit 2026-03-09T18:25:24.591273+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:25:24.882 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:24 vm00 bash[17468]: audit 2026-03-09T18:25:24.592105+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:25:24.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:24 vm08 bash[17774]: audit 2026-03-09T18:25:24.589750+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:25:24.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:24 vm08 bash[17774]: audit 2026-03-09T18:25:24.591273+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:25:24.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:24 vm08 bash[17774]: audit 2026-03-09T18:25:24.592105+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:25:26.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:25 vm00 bash[22468]: cluster 2026-03-09T18:25:24.464325+0000 mgr.y (mgr.24335) 147 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:26.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:25 vm00 bash[22468]: audit 2026-03-09T18:25:24.774520+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:25:26.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:25 vm00 bash[17468]: cluster 2026-03-09T18:25:24.464325+0000 mgr.y (mgr.24335) 147 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:26.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:25 vm00 
bash[17468]: audit 2026-03-09T18:25:24.774520+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:25:26.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:25:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:25] "GET /metrics HTTP/1.1" 200 207673 "" "Prometheus/2.33.4" 2026-03-09T18:25:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:25 vm08 bash[17774]: cluster 2026-03-09T18:25:24.464325+0000 mgr.y (mgr.24335) 147 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:25 vm08 bash[17774]: audit 2026-03-09T18:25:24.774520+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:25:27.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:27 vm00 bash[22468]: cluster 2026-03-09T18:25:26.464622+0000 mgr.y (mgr.24335) 148 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:27.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:27 vm00 bash[17468]: cluster 2026-03-09T18:25:26.464622+0000 mgr.y (mgr.24335) 148 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:27 vm08 bash[17774]: cluster 2026-03-09T18:25:26.464622+0000 mgr.y (mgr.24335) 148 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:28.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:25:28 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:27] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:29 vm00 bash[22468]: cluster 2026-03-09T18:25:28.464897+0000 
mgr.y (mgr.24335) 149 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:29 vm00 bash[22468]: audit 2026-03-09T18:25:28.523656+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:29 vm00 bash[22468]: audit 2026-03-09T18:25:28.523911+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:29 vm00 bash[22468]: audit 2026-03-09T18:25:28.536467+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:29 vm00 bash[22468]: audit 2026-03-09T18:25:28.536718+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:29 vm00 bash[17468]: cluster 2026-03-09T18:25:28.464897+0000 mgr.y (mgr.24335) 149 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:29 vm00 bash[17468]: audit 2026-03-09T18:25:28.523656+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:29 vm00 bash[17468]: audit 2026-03-09T18:25:28.523911+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:29 vm00 bash[17468]: audit 2026-03-09T18:25:28.536467+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:25:29.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:29 vm00 bash[17468]: audit 2026-03-09T18:25:28.536718+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:25:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:29 vm08 bash[17774]: cluster 2026-03-09T18:25:28.464897+0000 mgr.y (mgr.24335) 149 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:29 vm08 bash[17774]: audit 2026-03-09T18:25:28.523656+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:25:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:29 vm08 bash[17774]: audit 2026-03-09T18:25:28.523911+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:25:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:25:29 vm08 bash[17774]: audit 2026-03-09T18:25:28.536467+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:25:29.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:29 vm08 bash[17774]: audit 2026-03-09T18:25:28.536718+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:25:31.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:31 vm00 bash[22468]: cluster 2026-03-09T18:25:30.465458+0000 mgr.y (mgr.24335) 150 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:31.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:31 vm00 bash[17468]: cluster 2026-03-09T18:25:30.465458+0000 mgr.y (mgr.24335) 150 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:31.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:31 vm08 bash[17774]: cluster 2026-03-09T18:25:30.465458+0000 mgr.y (mgr.24335) 150 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:32.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:32 vm00 bash[22468]: audit 2026-03-09T18:25:31.368849+0000 mgr.y (mgr.24335) 151 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:32.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:32 vm00 bash[17468]: audit 2026-03-09T18:25:31.368849+0000 mgr.y (mgr.24335) 151 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T18:25:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:32 vm08 bash[17774]: audit 2026-03-09T18:25:31.368849+0000 mgr.y (mgr.24335) 151 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:33.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:33 vm00 bash[22468]: cluster 2026-03-09T18:25:32.465776+0000 mgr.y (mgr.24335) 152 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:33.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:33 vm00 bash[17468]: cluster 2026-03-09T18:25:32.465776+0000 mgr.y (mgr.24335) 152 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:33.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:33 vm00 bash[42815]: level=error ts=2026-03-09T18:25:33.511Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:33.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:33.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't 
contain any IP SANs" 2026-03-09T18:25:33.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:33.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:25:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:33 vm08 bash[17774]: cluster 2026-03-09T18:25:32.465776+0000 mgr.y (mgr.24335) 152 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:25:36.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:35 vm00 bash[22468]: cluster 2026-03-09T18:25:34.466538+0000 mgr.y (mgr.24335) 153 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:36.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:35 vm00 bash[17468]: cluster 2026-03-09T18:25:34.466538+0000 mgr.y (mgr.24335) 153 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:36.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:25:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:35] "GET /metrics HTTP/1.1" 200 207670 "" "Prometheus/2.33.4" 2026-03-09T18:25:36.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:35 vm08 bash[17774]: cluster 2026-03-09T18:25:34.466538+0000 mgr.y (mgr.24335) 153 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:37.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:37 vm00 bash[22468]: cluster 2026-03-09T18:25:36.466818+0000 mgr.y (mgr.24335) 154 : cluster 
[DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:37 vm00 bash[17468]: cluster 2026-03-09T18:25:36.466818+0000 mgr.y (mgr.24335) 154 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:37.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:37 vm08 bash[17774]: cluster 2026-03-09T18:25:36.466818+0000 mgr.y (mgr.24335) 154 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:38.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:25:37 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:37] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:25:39.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:39 vm00 bash[22468]: cluster 2026-03-09T18:25:38.467086+0000 mgr.y (mgr.24335) 155 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:39.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:39 vm00 bash[17468]: cluster 2026-03-09T18:25:38.467086+0000 mgr.y (mgr.24335) 155 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:39 vm08 bash[17774]: cluster 2026-03-09T18:25:38.467086+0000 mgr.y (mgr.24335) 155 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:41.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:41 vm00 bash[22468]: cluster 2026-03-09T18:25:40.467565+0000 mgr.y (mgr.24335) 156 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:41.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:41 vm00 bash[17468]: cluster 2026-03-09T18:25:40.467565+0000 mgr.y (mgr.24335) 156 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:41 vm08 bash[17774]: cluster 2026-03-09T18:25:40.467565+0000 mgr.y (mgr.24335) 156 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:42.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:42 vm00 bash[22468]: audit 2026-03-09T18:25:41.375780+0000 mgr.y (mgr.24335) 157 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:42.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:42 vm00 bash[17468]: audit 2026-03-09T18:25:41.375780+0000 mgr.y (mgr.24335) 157 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:42 vm08 bash[17774]: audit 2026-03-09T18:25:41.375780+0000 mgr.y (mgr.24335) 157 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:43 vm00 bash[22468]: cluster 2026-03-09T18:25:42.467911+0000 mgr.y (mgr.24335) 158 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:43.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:43 vm00 bash[17468]: cluster 2026-03-09T18:25:42.467911+0000 mgr.y (mgr.24335) 158 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 72 
MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:43.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:43 vm00 bash[42815]: level=error ts=2026-03-09T18:25:43.513Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:43.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:43.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:43.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:43.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:25:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:43 vm08 bash[17774]: cluster 2026-03-09T18:25:42.467911+0000 mgr.y (mgr.24335) 158 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:45.882 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:45 vm00 bash[22468]: cluster 2026-03-09T18:25:44.468471+0000 mgr.y (mgr.24335) 159 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:45.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:45 vm00 bash[17468]: cluster 2026-03-09T18:25:44.468471+0000 mgr.y (mgr.24335) 159 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:45.882 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:25:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:45] "GET /metrics HTTP/1.1" 200 207630 "" "Prometheus/2.33.4" 2026-03-09T18:25:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:45 vm08 bash[17774]: cluster 2026-03-09T18:25:44.468471+0000 mgr.y (mgr.24335) 159 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:47.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:47 vm00 bash[22468]: cluster 2026-03-09T18:25:46.469016+0000 mgr.y (mgr.24335) 160 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:47 vm00 bash[17468]: cluster 2026-03-09T18:25:46.469016+0000 mgr.y (mgr.24335) 160 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:47 vm08 bash[17774]: cluster 2026-03-09T18:25:46.469016+0000 mgr.y (mgr.24335) 160 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:48.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:25:47 vm08 bash[18535]: 
::ffff:192.168.123.108 - - [09/Mar/2026:18:25:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:25:49.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:49 vm00 bash[22468]: cluster 2026-03-09T18:25:48.469324+0000 mgr.y (mgr.24335) 161 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:49.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:49 vm00 bash[17468]: cluster 2026-03-09T18:25:48.469324+0000 mgr.y (mgr.24335) 161 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:49.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:49 vm08 bash[17774]: cluster 2026-03-09T18:25:48.469324+0000 mgr.y (mgr.24335) 161 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:51.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:51 vm00 bash[22468]: cluster 2026-03-09T18:25:50.469860+0000 mgr.y (mgr.24335) 162 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:51.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:51 vm00 bash[17468]: cluster 2026-03-09T18:25:50.469860+0000 mgr.y (mgr.24335) 162 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:51.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:51 vm08 bash[17774]: cluster 2026-03-09T18:25:50.469860+0000 mgr.y (mgr.24335) 162 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:52 vm00 bash[22468]: audit 2026-03-09T18:25:51.384703+0000 mgr.y (mgr.24335) 163 : audit [DBG] 
from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:52.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:52 vm00 bash[17468]: audit 2026-03-09T18:25:51.384703+0000 mgr.y (mgr.24335) 163 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:52.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:52 vm08 bash[17774]: audit 2026-03-09T18:25:51.384703+0000 mgr.y (mgr.24335) 163 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:25:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:53 vm00 bash[22468]: cluster 2026-03-09T18:25:52.470135+0000 mgr.y (mgr.24335) 164 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:53.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:53 vm00 bash[17468]: cluster 2026-03-09T18:25:52.470135+0000 mgr.y (mgr.24335) 164 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:53.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:53 vm00 bash[42815]: level=error ts=2026-03-09T18:25:53.513Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:53.882 
INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:53.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:25:53.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:25:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:25:53.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:25:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:53 vm08 bash[17774]: cluster 2026-03-09T18:25:52.470135+0000 mgr.y (mgr.24335) 164 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:55.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:55 vm00 bash[22468]: cluster 2026-03-09T18:25:54.470559+0000 mgr.y (mgr.24335) 165 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:55.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:55 vm00 bash[17468]: cluster 2026-03-09T18:25:54.470559+0000 mgr.y (mgr.24335) 165 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:55.882 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:25:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:55] "GET /metrics HTTP/1.1" 200 207630 "" "Prometheus/2.33.4" 2026-03-09T18:25:55.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:55 vm08 bash[17774]: cluster 2026-03-09T18:25:54.470559+0000 mgr.y (mgr.24335) 165 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:25:57.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:57 vm00 bash[22468]: cluster 2026-03-09T18:25:56.470818+0000 mgr.y (mgr.24335) 166 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:57.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:57 vm00 bash[17468]: cluster 2026-03-09T18:25:56.470818+0000 mgr.y (mgr.24335) 166 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:57 vm08 bash[17774]: cluster 2026-03-09T18:25:56.470818+0000 mgr.y (mgr.24335) 166 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:58.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:25:57 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:25:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:25:59.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:25:59 vm00 bash[22468]: cluster 2026-03-09T18:25:58.471079+0000 mgr.y (mgr.24335) 167 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:59.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:25:59 vm00 bash[17468]: cluster 2026-03-09T18:25:58.471079+0000 mgr.y (mgr.24335) 167 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:25:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:25:59 vm08 bash[17774]: cluster 
2026-03-09T18:25:58.471079+0000 mgr.y (mgr.24335) 167 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:01 vm00 bash[22468]: cluster 2026-03-09T18:26:00.471636+0000 mgr.y (mgr.24335) 168 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:01.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:01 vm00 bash[17468]: cluster 2026-03-09T18:26:00.471636+0000 mgr.y (mgr.24335) 168 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:01 vm08 bash[17774]: cluster 2026-03-09T18:26:00.471636+0000 mgr.y (mgr.24335) 168 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:02 vm08 bash[17774]: audit 2026-03-09T18:26:01.393520+0000 mgr.y (mgr.24335) 169 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:03.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:02 vm00 bash[22468]: audit 2026-03-09T18:26:01.393520+0000 mgr.y (mgr.24335) 169 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:03.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:02 vm00 bash[17468]: audit 2026-03-09T18:26:01.393520+0000 mgr.y (mgr.24335) 169 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:03.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:26:03 vm00 bash[22468]: cluster 2026-03-09T18:26:02.471995+0000 mgr.y (mgr.24335) 170 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:03.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:03 vm00 bash[17468]: cluster 2026-03-09T18:26:02.471995+0000 mgr.y (mgr.24335) 170 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:03.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:03 vm00 bash[42815]: level=error ts=2026-03-09T18:26:03.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:26:03.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:03.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:26:03.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:03.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:26:03.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:03 vm08 bash[17774]: cluster 2026-03-09T18:26:02.471995+0000 mgr.y (mgr.24335) 170 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:05.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:05 vm08 bash[17774]: cluster 2026-03-09T18:26:04.472398+0000 mgr.y (mgr.24335) 171 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:06.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:05 vm00 bash[22468]: cluster 2026-03-09T18:26:04.472398+0000 mgr.y (mgr.24335) 171 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:05 vm00 bash[17468]: cluster 2026-03-09T18:26:04.472398+0000 mgr.y (mgr.24335) 171 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:06.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:26:05 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:05] "GET /metrics HTTP/1.1" 200 207616 "" "Prometheus/2.33.4" 2026-03-09T18:26:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:07 vm00 bash[22468]: cluster 2026-03-09T18:26:06.472766+0000 mgr.y (mgr.24335) 172 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:07.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:07 vm00 bash[17468]: cluster 2026-03-09T18:26:06.472766+0000 mgr.y (mgr.24335) 172 : cluster [DBG] pgmap v144: 161 pgs: 161 
active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:07 vm08 bash[17774]: cluster 2026-03-09T18:26:06.472766+0000 mgr.y (mgr.24335) 172 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:08.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:26:07 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:07] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:26:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:09 vm00 bash[22468]: cluster 2026-03-09T18:26:08.473132+0000 mgr.y (mgr.24335) 173 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:09.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:09 vm00 bash[17468]: cluster 2026-03-09T18:26:08.473132+0000 mgr.y (mgr.24335) 173 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:09 vm08 bash[17774]: cluster 2026-03-09T18:26:08.473132+0000 mgr.y (mgr.24335) 173 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:11 vm00 bash[22468]: cluster 2026-03-09T18:26:10.473722+0000 mgr.y (mgr.24335) 174 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:11.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:11 vm00 bash[17468]: cluster 2026-03-09T18:26:10.473722+0000 mgr.y (mgr.24335) 174 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:26:11.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:11 vm08 bash[17774]: cluster 2026-03-09T18:26:10.473722+0000 mgr.y (mgr.24335) 174 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:12 vm00 bash[22468]: audit 2026-03-09T18:26:11.400937+0000 mgr.y (mgr.24335) 175 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:12 vm00 bash[17468]: audit 2026-03-09T18:26:11.400937+0000 mgr.y (mgr.24335) 175 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:12 vm08 bash[17774]: audit 2026-03-09T18:26:11.400937+0000 mgr.y (mgr.24335) 175 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:13.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:13 vm00 bash[22468]: cluster 2026-03-09T18:26:12.474054+0000 mgr.y (mgr.24335) 176 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:13.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:13 vm00 bash[17468]: cluster 2026-03-09T18:26:12.474054+0000 mgr.y (mgr.24335) 176 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:13.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:13 vm00 bash[42815]: level=error ts=2026-03-09T18:26:13.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 
err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:26:13.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:13.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:26:13.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:13.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:26:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:13 vm08 bash[17774]: cluster 2026-03-09T18:26:12.474054+0000 mgr.y (mgr.24335) 176 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:15 vm00 bash[22468]: cluster 2026-03-09T18:26:14.474500+0000 mgr.y (mgr.24335) 177 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:15.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:15 vm00 bash[17468]: cluster 2026-03-09T18:26:14.474500+0000 mgr.y (mgr.24335) 177 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:15.882 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:26:15 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:15] "GET /metrics HTTP/1.1" 200 207617 "" "Prometheus/2.33.4"
2026-03-09T18:26:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:15 vm08 bash[17774]: cluster 2026-03-09T18:26:14.474500+0000 mgr.y (mgr.24335) 177 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:17.127 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force'
2026-03-09T18:26:17.315 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:17 vm00 bash[22468]: cluster 2026-03-09T18:26:16.474807+0000 mgr.y (mgr.24335) 178 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:17.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:17 vm00 bash[17468]: cluster 2026-03-09T18:26:16.474807+0000 mgr.y (mgr.24335) 178 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:17.610 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force'
2026-03-09T18:26:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:17 vm08 bash[17774]: cluster 2026-03-09T18:26:16.474807+0000 mgr.y (mgr.24335) 178 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:18.153 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force'
2026-03-09T18:26:18.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:26:17 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:26:18.685 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (3m) 2m ago 3m 13.2M - ba2b418f427c 941abbc9e671
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (3m) 2m ago 3m 41.6M - 8.3.5 dad864ee21e9 771af00209da
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 2m ago 2m 63.3M - 3.5 e1d6a67b021e d1efcd22ebcc
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443 running (6m) 2m ago 6m 397M - 17.2.0 e1d6a67b021e f2ee8ac80d5d
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:9283 running (7m) 2m ago 7m 440M - 17.2.0 e1d6a67b021e 67bec09a4a4c
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (7m) 2m ago 7m 46.9M 2048M 17.2.0 e1d6a67b021e 819e8890799a
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (6m) 2m ago 6m 43.9M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (6m) 2m ago 6m 44.9M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (3m) 2m ago 3m 7491k - 1dbe0e931976 980c035e4ada
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (3m) 2m ago 3m 8819k - 1dbe0e931976 bba8a2ca502c
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (6m) 2m ago 6m 43.9M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (5m) 2m ago 5m 45.8M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (5m) 2m ago 5m 41.5M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (5m) 2m ago 5m 43.2M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (4m) 2m ago 4m 43.7M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (4m) 2m ago 4m 43.1M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (4m) 2m ago 4m 41.8M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (4m) 2m ago 4m 42.5M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (3m) 2m ago 3m 34.7M - 514e6a882f6e 4ab95bb45c38
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (3m) 2m ago 3m 81.4M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:26:19.206 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (3m) 2m ago 3m 81.0M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:26:19.264 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {},
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:26:19.753 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:26:19.754 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 15
2026-03-09T18:26:19.754 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:26:19.754 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:26:19.774 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:19 vm00 bash[22468]: cluster 2026-03-09T18:26:18.475223+0000 mgr.y (mgr.24335) 179 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:19.774 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:19 vm00 bash[17468]: cluster 2026-03-09T18:26:18.475223+0000 mgr.y (mgr.24335) 179 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:19.812 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-09T18:26:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:19 vm08 bash[17774]: cluster 2026-03-09T18:26:18.475223+0000 mgr.y (mgr.24335) 179 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: cluster:
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: id: 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: health: HEALTH_OK
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: services:
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: mon: 3 daemons, quorum a,c,b (age 6m)
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: mgr: y(active, since 3m), standbys: x
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: osd: 8 osds: 8 up (since 4m), 8 in (since 4m)
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: data:
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: pools: 6 pools, 161 pgs
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: objects: 209 objects, 457 KiB
2026-03-09T18:26:20.301 INFO:teuthology.orchestra.run.vm00.stdout: usage: 72 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:26:20.302 INFO:teuthology.orchestra.run.vm00.stdout: pgs: 161 active+clean
2026-03-09T18:26:20.302 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:26:20.302 INFO:teuthology.orchestra.run.vm00.stdout: io:
2026-03-09T18:26:20.302 INFO:teuthology.orchestra.run.vm00.stdout: client: 853 B/s rd, 0 op/s rd, 0 op/s wr
2026-03-09T18:26:20.302 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:26:20.359 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls'
2026-03-09T18:26:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:20 vm00 bash[22468]: audit 2026-03-09T18:26:19.201675+0000 mgr.y (mgr.24335) 180 : audit [DBG] from='client.24718 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:20.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:20 vm00 bash[22468]: audit 2026-03-09T18:26:19.753271+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.100:0/4024283826' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:26:20.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:20 vm00 bash[22468]: audit 2026-03-09T18:26:20.302051+0000 mon.a (mon.0) 730 : audit [DBG] from='client.? 192.168.123.100:0/3171685644' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:26:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:20 vm00 bash[17468]: audit 2026-03-09T18:26:19.201675+0000 mgr.y (mgr.24335) 180 : audit [DBG] from='client.24718 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:20 vm00 bash[17468]: audit 2026-03-09T18:26:19.753271+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.100:0/4024283826' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:26:20.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:20 vm00 bash[17468]: audit 2026-03-09T18:26:20.302051+0000 mon.a (mon.0) 730 : audit [DBG] from='client.? 192.168.123.100:0/3171685644' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:26:20.851 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-09T18:26:20.851 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 2m ago 3m vm00=a;count:1
2026-03-09T18:26:20.851 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 3m ago 3m vm08=a;count:1
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo 1/1 2m ago 3m count:1
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 3m ago 6m vm00=y;vm08=x;count:2
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:mon 3/3 3m ago 6m vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b;count:3
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 2/2 3m ago 3m vm00=a;vm08=b;count:2
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:osd 8 3m ago -
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 3m ago 3m vm08=a;count:1
2026-03-09T18:26:20.852 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo ?:8000 2/2 3m ago 3m count:2
2026-03-09T18:26:20.912 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1'
2026-03-09T18:26:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:20 vm08 bash[17774]: audit 2026-03-09T18:26:19.201675+0000 mgr.y (mgr.24335) 180 : audit [DBG] from='client.24718 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:20 vm08 bash[17774]: audit 2026-03-09T18:26:19.753271+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.100:0/4024283826' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:26:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:20 vm08 bash[17774]: audit 2026-03-09T18:26:20.302051+0000 mon.a (mon.0) 730 : audit [DBG] from='client.? 192.168.123.100:0/3171685644' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:26:21.638 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled to redeploy mgr.x on host 'vm08'
2026-03-09T18:26:21.669 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:21 vm00 bash[17468]: cluster 2026-03-09T18:26:20.475770+0000 mgr.y (mgr.24335) 181 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:21.669 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:21 vm00 bash[17468]: audit 2026-03-09T18:26:20.850380+0000 mgr.y (mgr.24335) 182 : audit [DBG] from='client.24730 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:21.669 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:21 vm00 bash[17468]: audit 2026-03-09T18:26:21.429702+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.100:0/326445844' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-09T18:26:21.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:21 vm00 bash[22468]: cluster 2026-03-09T18:26:20.475770+0000 mgr.y (mgr.24335) 181 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:21.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:21 vm00 bash[22468]: audit 2026-03-09T18:26:20.850380+0000 mgr.y (mgr.24335) 182 : audit [DBG] from='client.24730 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:21.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:21 vm00 bash[22468]: audit 2026-03-09T18:26:21.429702+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.100:0/326445844' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-09T18:26:21.737 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps --refresh'
2026-03-09T18:26:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:21 vm08 bash[17774]: cluster 2026-03-09T18:26:20.475770+0000 mgr.y (mgr.24335) 181 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:21 vm08 bash[17774]: audit 2026-03-09T18:26:20.850380+0000 mgr.y (mgr.24335) 182 : audit [DBG] from='client.24730 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:21.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:21 vm08 bash[17774]: audit 2026-03-09T18:26:21.429702+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.100:0/326445844' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (3m) 2m ago 3m 13.2M - ba2b418f427c 941abbc9e671
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (3m) 3m ago 3m 41.6M - 8.3.5 dad864ee21e9 771af00209da
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (3m) 2m ago 3m 63.3M - 3.5 e1d6a67b021e d1efcd22ebcc
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443 running (6m) 3m ago 6m 397M - 17.2.0 e1d6a67b021e f2ee8ac80d5d
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:9283 running (7m) 2m ago 7m 440M - 17.2.0 e1d6a67b021e 67bec09a4a4c
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (7m) 2m ago 7m 46.9M 2048M 17.2.0 e1d6a67b021e 819e8890799a
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (6m) 3m ago 6m 43.9M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (6m) 2m ago 6m 44.9M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (3m) 2m ago 3m 7491k - 1dbe0e931976 980c035e4ada
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (3m) 3m ago 3m 8819k - 1dbe0e931976 bba8a2ca502c
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (6m) 2m ago 6m 43.9M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (5m) 2m ago 5m 45.8M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (5m) 2m ago 5m 41.5M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (5m) 2m ago 5m 43.2M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (5m) 3m ago 5m 43.7M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (4m) 3m ago 4m 43.1M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (4m) 3m ago 4m 41.8M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (4m) 3m ago 4m 42.5M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (3m) 3m ago 3m 34.7M - 514e6a882f6e 4ab95bb45c38
2026-03-09T18:26:22.216 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (3m) 2m ago 3m 81.4M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:26:22.217 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (3m) 3m ago 3m 81.0M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:26:22.287 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-09T18:26:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:22 vm00 bash[22468]: audit 2026-03-09T18:26:21.410957+0000 mgr.y (mgr.24335) 183 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:22 vm00 bash[22468]: audit 2026-03-09T18:26:21.631141+0000 mgr.y (mgr.24335) 184 : audit [DBG] from='client.14856 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.x", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:22 vm00 bash[22468]: audit 2026-03-09T18:26:21.638440+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:22 vm00 bash[22468]: cephadm 2026-03-09T18:26:21.639149+0000 mgr.y (mgr.24335) 185 : cephadm [INF] Schedule redeploy daemon mgr.x
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:22 vm00 bash[22468]: audit 2026-03-09T18:26:21.641132+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:22 vm00 bash[17468]: audit 2026-03-09T18:26:21.410957+0000 mgr.y (mgr.24335) 183 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:22 vm00 bash[17468]: audit 2026-03-09T18:26:21.631141+0000 mgr.y (mgr.24335) 184 : audit [DBG] from='client.14856 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.x", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:22 vm00 bash[17468]: audit 2026-03-09T18:26:21.638440+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:22 vm00 bash[17468]: cephadm 2026-03-09T18:26:21.639149+0000 mgr.y (mgr.24335) 185 : cephadm [INF] Schedule redeploy daemon mgr.x
2026-03-09T18:26:22.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:22 vm00 bash[17468]: audit 2026-03-09T18:26:21.641132+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:22 vm08 bash[17774]: audit 2026-03-09T18:26:21.410957+0000 mgr.y (mgr.24335) 183 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:26:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:22 vm08 bash[17774]: audit 2026-03-09T18:26:21.631141+0000 mgr.y (mgr.24335) 184 : audit [DBG] from='client.14856 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.x", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:22 vm08 bash[17774]: audit 2026-03-09T18:26:21.638440+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24335 ' entity='mgr.y'
2026-03-09T18:26:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:22 vm08 bash[17774]: cephadm 2026-03-09T18:26:21.639149+0000 mgr.y (mgr.24335) 185 : cephadm [INF] Schedule redeploy daemon mgr.x
2026-03-09T18:26:22.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:22 vm08 bash[17774]: audit 2026-03-09T18:26:21.641132+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:26:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:23 vm00 bash[22468]: audit 2026-03-09T18:26:22.210416+0000 mgr.y (mgr.24335) 186 : audit [DBG] from='client.24686 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:23 vm00 bash[22468]: cluster 2026-03-09T18:26:22.476129+0000 mgr.y (mgr.24335) 187 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:23.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:23 vm00 bash[17468]: audit 2026-03-09T18:26:22.210416+0000 mgr.y (mgr.24335) 186 : audit [DBG] from='client.24686 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:23.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:23 vm00 bash[17468]: cluster 2026-03-09T18:26:22.476129+0000 mgr.y (mgr.24335) 187 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:23.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:23 vm00 bash[42815]: level=error ts=2026-03-09T18:26:23.515Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:26:23.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:23.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:26:23.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:23.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:26:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:23 vm08 bash[17774]: audit 2026-03-09T18:26:22.210416+0000 mgr.y (mgr.24335) 186 : audit [DBG] from='client.24686 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:26:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:23 vm08 bash[17774]: cluster 2026-03-09T18:26:22.476129+0000 mgr.y (mgr.24335) 187 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:25.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:25 vm08 bash[17774]: cluster 2026-03-09T18:26:24.476818+0000 mgr.y (mgr.24335) 188 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:26.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:25 vm00 bash[22468]: cluster 2026-03-09T18:26:24.476818+0000 mgr.y (mgr.24335) 188 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:26.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:25 vm00 bash[17468]: cluster 2026-03-09T18:26:24.476818+0000 mgr.y (mgr.24335) 188 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:26:26.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:26:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:25] "GET /metrics HTTP/1.1" 200 207617 "" "Prometheus/2.33.4"
2026-03-09T18:26:27.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:27 vm00 bash[22468]: cluster 2026-03-09T18:26:26.477319+0000 mgr.y (mgr.24335) 189 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:27.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:27 vm00 bash[17468]: cluster 2026-03-09T18:26:26.477319+0000 mgr.y (mgr.24335) 189 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:27 vm08 bash[17774]: cluster 2026-03-09T18:26:26.477319+0000 mgr.y (mgr.24335) 189 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:28.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:26:27 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:27] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:26:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:28 vm00 bash[22468]: audit 2026-03-09T18:26:28.525677+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:28 vm00 bash[22468]: audit 2026-03-09T18:26:28.526003+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:28 vm00 bash[22468]: audit 2026-03-09T18:26:28.538365+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:28 vm00 bash[22468]: audit 2026-03-09T18:26:28.538624+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:28 vm00 bash[17468]: audit 2026-03-09T18:26:28.525677+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:28 vm00 bash[17468]: audit 2026-03-09T18:26:28.526003+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:28 vm00 bash[17468]: audit 2026-03-09T18:26:28.538365+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:28.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:28 vm00 bash[17468]: audit 2026-03-09T18:26:28.538624+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:28 vm08 bash[17774]: audit 2026-03-09T18:26:28.525677+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:28 vm08 bash[17774]: audit 2026-03-09T18:26:28.526003+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:26:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:28 vm08 bash[17774]: audit 2026-03-09T18:26:28.538365+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:28 vm08 bash[17774]: audit 2026-03-09T18:26:28.538624+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:26:30.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:30 vm08 bash[17774]: cluster 2026-03-09T18:26:28.477645+0000 mgr.y (mgr.24335) 190 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:30.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:30 vm00 bash[22468]: cluster 2026-03-09T18:26:28.477645+0000 mgr.y (mgr.24335) 190 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:26:30.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:30 vm00 bash[17468]: cluster 2026-03-09T18:26:28.477645+0000 mgr.y (mgr.24335) 190 : cluster
[DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:31.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:31 vm08 bash[17774]: cluster 2026-03-09T18:26:30.478262+0000 mgr.y (mgr.24335) 191 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:31.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:31 vm00 bash[17468]: cluster 2026-03-09T18:26:30.478262+0000 mgr.y (mgr.24335) 191 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:31.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:31 vm00 bash[22468]: cluster 2026-03-09T18:26:30.478262+0000 mgr.y (mgr.24335) 191 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:32.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:32 vm00 bash[17468]: audit 2026-03-09T18:26:31.418518+0000 mgr.y (mgr.24335) 192 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:32.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:32 vm00 bash[22468]: audit 2026-03-09T18:26:31.418518+0000 mgr.y (mgr.24335) 192 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:32 vm08 bash[17774]: audit 2026-03-09T18:26:31.418518+0000 mgr.y (mgr.24335) 192 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:33.517 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:33 vm00 bash[22468]: cluster 2026-03-09T18:26:32.478515+0000 
mgr.y (mgr.24335) 193 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:33.517 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:33 vm00 bash[17468]: cluster 2026-03-09T18:26:32.478515+0000 mgr.y (mgr.24335) 193 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:33.517 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:33 vm00 bash[42815]: level=error ts=2026-03-09T18:26:33.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:26:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:33 vm08 bash[17774]: cluster 2026-03-09T18:26:32.478515+0000 mgr.y (mgr.24335) 193 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:33.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:33.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:26:33.882 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:33 vm00 bash[42815]: level=warn 
ts=2026-03-09T18:26:33.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:26:35.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:35 vm00 bash[22468]: cluster 2026-03-09T18:26:34.479073+0000 mgr.y (mgr.24335) 194 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:35.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:35 vm00 bash[17468]: cluster 2026-03-09T18:26:34.479073+0000 mgr.y (mgr.24335) 194 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:35.882 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:26:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:35] "GET /metrics HTTP/1.1" 200 207617 "" "Prometheus/2.33.4" 2026-03-09T18:26:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:35 vm08 bash[17774]: cluster 2026-03-09T18:26:34.479073+0000 mgr.y (mgr.24335) 194 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:37.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:37 vm00 bash[22468]: cluster 2026-03-09T18:26:36.479453+0000 mgr.y (mgr.24335) 195 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:37.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:37 vm00 bash[17468]: cluster 2026-03-09T18:26:36.479453+0000 mgr.y (mgr.24335) 195 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:37.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:37 vm08 bash[17774]: cluster 2026-03-09T18:26:36.479453+0000 mgr.y (mgr.24335) 195 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:38.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:26:37 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:37] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:26:40.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:39 vm00 bash[22468]: cluster 2026-03-09T18:26:38.479836+0000 mgr.y (mgr.24335) 196 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:40.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:39 vm00 bash[17468]: cluster 2026-03-09T18:26:38.479836+0000 mgr.y (mgr.24335) 196 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:40.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:39 vm08 bash[17774]: cluster 2026-03-09T18:26:38.479836+0000 mgr.y (mgr.24335) 196 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:42.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:41 vm00 bash[22468]: cluster 2026-03-09T18:26:40.480419+0000 mgr.y (mgr.24335) 197 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:42.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:41 vm00 bash[17468]: cluster 2026-03-09T18:26:40.480419+0000 mgr.y (mgr.24335) 197 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:41 vm08 bash[17774]: cluster 
2026-03-09T18:26:40.480419+0000 mgr.y (mgr.24335) 197 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:42 vm00 bash[17468]: audit 2026-03-09T18:26:41.426653+0000 mgr.y (mgr.24335) 198 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:43.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:42 vm00 bash[22468]: audit 2026-03-09T18:26:41.426653+0000 mgr.y (mgr.24335) 198 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:42 vm08 bash[17774]: audit 2026-03-09T18:26:41.426653+0000 mgr.y (mgr.24335) 198 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:43.876 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:43 vm00 bash[42815]: level=error ts=2026-03-09T18:26:43.517Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:26:43.876 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:43.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify 
attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:26:43.876 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:43.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:26:44.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:43 vm00 bash[22468]: cluster 2026-03-09T18:26:42.480817+0000 mgr.y (mgr.24335) 199 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:44.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:43 vm00 bash[17468]: cluster 2026-03-09T18:26:42.480817+0000 mgr.y (mgr.24335) 199 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:44.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:43 vm08 bash[17774]: cluster 2026-03-09T18:26:42.480817+0000 mgr.y (mgr.24335) 199 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:46.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:45 vm00 bash[22468]: cluster 2026-03-09T18:26:44.481629+0000 mgr.y (mgr.24335) 200 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:45 vm00 bash[17468]: cluster 2026-03-09T18:26:44.481629+0000 mgr.y (mgr.24335) 200 : cluster [DBG] pgmap v163: 161 pgs: 
161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:46.131 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:26:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:45] "GET /metrics HTTP/1.1" 200 207606 "" "Prometheus/2.33.4" 2026-03-09T18:26:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:45 vm08 bash[17774]: cluster 2026-03-09T18:26:44.481629+0000 mgr.y (mgr.24335) 200 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:47 vm00 bash[22468]: cluster 2026-03-09T18:26:46.481985+0000 mgr.y (mgr.24335) 201 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:47.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:47 vm00 bash[17468]: cluster 2026-03-09T18:26:46.481985+0000 mgr.y (mgr.24335) 201 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:47 vm08 bash[17774]: cluster 2026-03-09T18:26:46.481985+0000 mgr.y (mgr.24335) 201 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:48.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:26:47 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:26:49.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:49 vm00 bash[22468]: cluster 2026-03-09T18:26:48.482278+0000 mgr.y (mgr.24335) 202 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:26:49 vm00 bash[17468]: cluster 2026-03-09T18:26:48.482278+0000 mgr.y (mgr.24335) 202 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:49.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:49 vm08 bash[17774]: cluster 2026-03-09T18:26:48.482278+0000 mgr.y (mgr.24335) 202 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:51.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:51 vm00 bash[22468]: cluster 2026-03-09T18:26:50.482986+0000 mgr.y (mgr.24335) 203 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:51.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:51 vm00 bash[17468]: cluster 2026-03-09T18:26:50.482986+0000 mgr.y (mgr.24335) 203 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:51.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:51 vm08 bash[17774]: cluster 2026-03-09T18:26:50.482986+0000 mgr.y (mgr.24335) 203 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:52 vm00 bash[22468]: audit 2026-03-09T18:26:51.435081+0000 mgr.y (mgr.24335) 204 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:52 vm00 bash[17468]: audit 2026-03-09T18:26:51.435081+0000 mgr.y (mgr.24335) 204 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:52.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:52 vm08 bash[17774]: audit 2026-03-09T18:26:51.435081+0000 mgr.y (mgr.24335) 204 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:26:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:53 vm00 bash[22468]: cluster 2026-03-09T18:26:52.483336+0000 mgr.y (mgr.24335) 205 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:53.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:53 vm00 bash[17468]: cluster 2026-03-09T18:26:52.483336+0000 mgr.y (mgr.24335) 205 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:53 vm00 bash[42815]: level=error ts=2026-03-09T18:26:53.518Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:26:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:53.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 
2026-03-09T18:26:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:26:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:26:53.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:26:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:53 vm08 bash[17774]: cluster 2026-03-09T18:26:52.483336+0000 mgr.y (mgr.24335) 205 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:55.994 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:26:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:55] "GET /metrics HTTP/1.1" 200 207606 "" "Prometheus/2.33.4" 2026-03-09T18:26:56.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:55 vm00 bash[17468]: cluster 2026-03-09T18:26:54.483879+0000 mgr.y (mgr.24335) 206 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:56.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:56 vm00 bash[22468]: cluster 2026-03-09T18:26:54.483879+0000 mgr.y (mgr.24335) 206 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:56.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:55 vm08 bash[17774]: cluster 2026-03-09T18:26:54.483879+0000 mgr.y (mgr.24335) 206 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:26:57.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:26:57 vm00 bash[17468]: cluster 2026-03-09T18:26:56.484449+0000 mgr.y (mgr.24335) 207 : cluster [DBG] pgmap v169: 161 pgs: 
161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:57.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:26:57 vm00 bash[22468]: cluster 2026-03-09T18:26:56.484449+0000 mgr.y (mgr.24335) 207 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:26:57 vm08 bash[17774]: cluster 2026-03-09T18:26:56.484449+0000 mgr.y (mgr.24335) 207 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:26:58.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:26:57 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:26:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:27:00.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:00 vm00 bash[22468]: cluster 2026-03-09T18:26:58.484748+0000 mgr.y (mgr.24335) 208 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:00.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:00 vm00 bash[17468]: cluster 2026-03-09T18:26:58.484748+0000 mgr.y (mgr.24335) 208 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:00.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:00 vm08 bash[17774]: cluster 2026-03-09T18:26:58.484748+0000 mgr.y (mgr.24335) 208 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:01 vm00 bash[17468]: cluster 2026-03-09T18:27:00.485398+0000 mgr.y (mgr.24335) 209 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:01 vm00 bash[17468]: audit 2026-03-09T18:27:00.838291+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:01 vm00 bash[17468]: audit 2026-03-09T18:27:00.839139+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:01 vm00 bash[17468]: audit 2026-03-09T18:27:00.839724+0000 mon.c (mon.1) 115 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:01 vm00 bash[22468]: cluster 2026-03-09T18:27:00.485398+0000 mgr.y (mgr.24335) 209 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:01 vm00 bash[22468]: audit 2026-03-09T18:27:00.838291+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:01 vm00 bash[22468]: audit 2026-03-09T18:27:00.839139+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:02.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:01 vm00 bash[22468]: audit 2026-03-09T18:27:00.839724+0000 mon.c (mon.1) 115 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:01 vm08 bash[17774]: cluster 2026-03-09T18:27:00.485398+0000 mgr.y 
(mgr.24335) 209 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:01 vm08 bash[17774]: audit 2026-03-09T18:27:00.838291+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:01 vm08 bash[17774]: audit 2026-03-09T18:27:00.839139+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:01 vm08 bash[17774]: audit 2026-03-09T18:27:00.839724+0000 mon.c (mon.1) 115 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:03.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:02 vm00 bash[17468]: audit 2026-03-09T18:27:01.445859+0000 mgr.y (mgr.24335) 210 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:03.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:02 vm00 bash[22468]: audit 2026-03-09T18:27:01.445859+0000 mgr.y (mgr.24335) 210 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:03.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:02 vm08 bash[17774]: audit 2026-03-09T18:27:01.445859+0000 mgr.y (mgr.24335) 210 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:03.848 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:03 vm00 bash[42815]: level=error ts=2026-03-09T18:27:03.519Z caller=dispatch.go:354 component=dispatcher 
msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:03.848 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:03.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:03.848 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:03.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:04.113 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:03 vm08 bash[17774]: cluster 2026-03-09T18:27:02.485709+0000 mgr.y (mgr.24335) 211 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:04.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:03 vm00 bash[17468]: cluster 2026-03-09T18:27:02.485709+0000 mgr.y (mgr.24335) 211 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:04.132 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:03 vm00 bash[22468]: cluster 2026-03-09T18:27:02.485709+0000 mgr.y (mgr.24335) 211 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:03.958421+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:04.033919+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:04.311429+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:04.315093+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:04.315412+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:04.316452+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:27:04.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:04 vm08 bash[17774]: audit 2026-03-09T18:27:04.317462+0000 mon.c (mon.1) 118 : audit [DBG] 
from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:03.958421+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:04.033919+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:04.311429+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:04.315093+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:04.315412+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:04.316452+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:04 vm00 bash[22468]: audit 2026-03-09T18:27:04.317462+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:03.958421+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:04.033919+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:05.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:04.311429+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:04.315093+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:04.315412+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:27:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:04.316452+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:27:05.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:04 vm00 bash[17468]: audit 2026-03-09T18:27:04.317462+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:06.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:05 vm00 bash[22468]: cephadm 2026-03-09T18:27:04.318386+0000 
mgr.y (mgr.24335) 212 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:27:06.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:05 vm00 bash[22468]: cluster 2026-03-09T18:27:04.486134+0000 mgr.y (mgr.24335) 213 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:05 vm00 bash[17468]: cephadm 2026-03-09T18:27:04.318386+0000 mgr.y (mgr.24335) 212 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:27:06.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:05 vm00 bash[17468]: cluster 2026-03-09T18:27:04.486134+0000 mgr.y (mgr.24335) 213 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:06.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:27:05 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:05] "GET /metrics HTTP/1.1" 200 207612 "" "Prometheus/2.33.4" 2026-03-09T18:27:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:05 vm08 bash[17774]: cephadm 2026-03-09T18:27:04.318386+0000 mgr.y (mgr.24335) 212 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:27:06.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:05 vm08 bash[17774]: cluster 2026-03-09T18:27:04.486134+0000 mgr.y (mgr.24335) 213 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:07 vm00 bash[22468]: cluster 2026-03-09T18:27:06.486505+0000 mgr.y (mgr.24335) 214 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:07 vm00 bash[17468]: cluster 2026-03-09T18:27:06.486505+0000 mgr.y (mgr.24335) 214 : 
cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:07 vm08 bash[17774]: cluster 2026-03-09T18:27:06.486505+0000 mgr.y (mgr.24335) 214 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:08.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:07 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:07] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:27:09.585 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:09 vm00 bash[17468]: cluster 2026-03-09T18:27:08.486908+0000 mgr.y (mgr.24335) 215 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:09 vm00 bash[22468]: cluster 2026-03-09T18:27:08.486908+0000 mgr.y (mgr.24335) 215 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:09 vm08 bash[17774]: cluster 2026-03-09T18:27:08.486908+0000 mgr.y (mgr.24335) 215 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:11 vm00 bash[22468]: cluster 2026-03-09T18:27:10.487435+0000 mgr.y (mgr.24335) 216 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:11 vm00 bash[17468]: cluster 2026-03-09T18:27:10.487435+0000 mgr.y (mgr.24335) 216 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:11.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:11 vm08 bash[17774]: cluster 2026-03-09T18:27:10.487435+0000 mgr.y (mgr.24335) 216 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:12 vm08 bash[17774]: audit 2026-03-09T18:27:11.448462+0000 mgr.y (mgr.24335) 217 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:13.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:12 vm00 bash[22468]: audit 2026-03-09T18:27:11.448462+0000 mgr.y (mgr.24335) 217 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:13.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:12 vm00 bash[17468]: audit 2026-03-09T18:27:11.448462+0000 mgr.y (mgr.24335) 217 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:13.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:13 vm00 bash[22468]: cluster 2026-03-09T18:27:12.487721+0000 mgr.y (mgr.24335) 218 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:13.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:13 vm00 bash[17468]: cluster 2026-03-09T18:27:12.487721+0000 mgr.y (mgr.24335) 218 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:13.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:13 vm00 bash[42815]: level=error ts=2026-03-09T18:27:13.519Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" 
num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:13.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:13.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:13.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:13.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:13 vm08 bash[17774]: cluster 2026-03-09T18:27:12.487721+0000 mgr.y (mgr.24335) 218 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:15 vm08 bash[17774]: cluster 2026-03-09T18:27:14.488147+0000 mgr.y (mgr.24335) 219 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:16.131 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:15 vm00 bash[22468]: cluster 2026-03-09T18:27:14.488147+0000 mgr.y (mgr.24335) 219 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:16.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:15 vm00 bash[17468]: cluster 2026-03-09T18:27:14.488147+0000 mgr.y (mgr.24335) 219 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:16.131 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:27:15 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:15] "GET /metrics HTTP/1.1" 200 207602 "" "Prometheus/2.33.4" 2026-03-09T18:27:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:17 vm00 bash[22468]: cluster 2026-03-09T18:27:16.488407+0000 mgr.y (mgr.24335) 220 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:17 vm00 bash[17468]: cluster 2026-03-09T18:27:16.488407+0000 mgr.y (mgr.24335) 220 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:17 vm08 bash[17774]: cluster 2026-03-09T18:27:16.488407+0000 mgr.y (mgr.24335) 220 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:18.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:17 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:27:19.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:19 vm00 bash[22468]: cluster 2026-03-09T18:27:18.488680+0000 mgr.y (mgr.24335) 221 : cluster [DBG] pgmap v180: 
161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:19.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:19 vm00 bash[17468]: cluster 2026-03-09T18:27:18.488680+0000 mgr.y (mgr.24335) 221 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:19 vm08 bash[17774]: cluster 2026-03-09T18:27:18.488680+0000 mgr.y (mgr.24335) 221 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:21 vm00 bash[22468]: cluster 2026-03-09T18:27:20.489304+0000 mgr.y (mgr.24335) 222 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:21.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:21 vm00 bash[17468]: cluster 2026-03-09T18:27:20.489304+0000 mgr.y (mgr.24335) 222 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:21 vm08 bash[17774]: cluster 2026-03-09T18:27:20.489304+0000 mgr.y (mgr.24335) 222 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:22 vm00 bash[22468]: audit 2026-03-09T18:27:21.457546+0000 mgr.y (mgr.24335) 223 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:22.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:22 vm00 bash[17468]: audit 2026-03-09T18:27:21.457546+0000 mgr.y (mgr.24335) 223 : audit [DBG] 
from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:22 vm08 bash[17774]: audit 2026-03-09T18:27:21.457546+0000 mgr.y (mgr.24335) 223 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:23.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:23 vm00 bash[22468]: cluster 2026-03-09T18:27:22.489686+0000 mgr.y (mgr.24335) 224 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:23.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:23 vm00 bash[17468]: cluster 2026-03-09T18:27:22.489686+0000 mgr.y (mgr.24335) 224 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:23.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:23 vm00 bash[42815]: level=error ts=2026-03-09T18:27:23.520Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:23.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:23.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:23.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:23.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:23.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:23 vm08 bash[17774]: cluster 2026-03-09T18:27:22.489686+0000 mgr.y (mgr.24335) 224 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:25 vm00 bash[22468]: cluster 2026-03-09T18:27:24.490276+0000 mgr.y (mgr.24335) 225 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:25.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:25 vm00 bash[17468]: cluster 2026-03-09T18:27:24.490276+0000 mgr.y (mgr.24335) 225 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:25.882 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:27:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:25] "GET /metrics HTTP/1.1" 200 207602 "" "Prometheus/2.33.4" 2026-03-09T18:27:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:25 vm08 bash[17774]: cluster 2026-03-09T18:27:24.490276+0000 mgr.y (mgr.24335) 225 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:27.631 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:27 vm00 bash[22468]: cluster 2026-03-09T18:27:26.490580+0000 mgr.y (mgr.24335) 226 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:27.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:27 vm00 bash[17468]: cluster 2026-03-09T18:27:26.490580+0000 mgr.y (mgr.24335) 226 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:27 vm08 bash[17774]: cluster 2026-03-09T18:27:26.490580+0000 mgr.y (mgr.24335) 226 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:28.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:27 vm08 bash[18535]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:27] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:27:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:28 vm08 bash[17774]: audit 2026-03-09T18:27:28.529089+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:28 vm08 bash[17774]: audit 2026-03-09T18:27:28.529509+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:28 vm08 bash[17774]: audit 2026-03-09T18:27:28.541043+0000 mon.c (mon.1) 120 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 
2026-03-09T18:27:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:28 vm08 bash[17774]: audit 2026-03-09T18:27:28.541488+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:29.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:28 vm00 bash[22468]: audit 2026-03-09T18:27:28.529089+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:29.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:28 vm00 bash[22468]: audit 2026-03-09T18:27:28.529509+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:29.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:28 vm00 bash[22468]: audit 2026-03-09T18:27:28.541043+0000 mon.c (mon.1) 120 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:29.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:28 vm00 bash[22468]: audit 2026-03-09T18:27:28.541488+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:28 vm00 bash[17468]: audit 2026-03-09T18:27:28.529089+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:28 vm00 bash[17468]: audit 
2026-03-09T18:27:28.529509+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:27:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:28 vm00 bash[17468]: audit 2026-03-09T18:27:28.541043+0000 mon.c (mon.1) 120 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:29.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:28 vm00 bash[17468]: audit 2026-03-09T18:27:28.541488+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:27:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:29 vm00 bash[22468]: cluster 2026-03-09T18:27:28.490826+0000 mgr.y (mgr.24335) 227 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:29 vm00 bash[17468]: cluster 2026-03-09T18:27:28.490826+0000 mgr.y (mgr.24335) 227 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:30.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:29 vm08 bash[17774]: cluster 2026-03-09T18:27:28.490826+0000 mgr.y (mgr.24335) 227 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:32.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:32 vm00 bash[22468]: cluster 2026-03-09T18:27:30.491364+0000 mgr.y (mgr.24335) 228 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:32.381 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:32 vm00 bash[22468]: audit 2026-03-09T18:27:31.466914+0000 mgr.y (mgr.24335) 229 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:32 vm00 bash[17468]: cluster 2026-03-09T18:27:30.491364+0000 mgr.y (mgr.24335) 228 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:32.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:32 vm00 bash[17468]: audit 2026-03-09T18:27:31.466914+0000 mgr.y (mgr.24335) 229 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:32.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:32 vm08 bash[17774]: cluster 2026-03-09T18:27:30.491364+0000 mgr.y (mgr.24335) 228 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:32.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:32 vm08 bash[17774]: audit 2026-03-09T18:27:31.466914+0000 mgr.y (mgr.24335) 229 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:33.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:33 vm00 bash[22468]: cluster 2026-03-09T18:27:32.491724+0000 mgr.y (mgr.24335) 230 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:33.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:33 vm00 bash[17468]: cluster 2026-03-09T18:27:32.491724+0000 mgr.y (mgr.24335) 230 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T18:27:33.631 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:33 vm00 bash[42815]: level=error ts=2026-03-09T18:27:33.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:33.632 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:33.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:33.632 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:33.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:33 vm08 bash[17774]: cluster 2026-03-09T18:27:32.491724+0000 mgr.y (mgr.24335) 230 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:36.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:35 vm00 bash[22468]: cluster 
2026-03-09T18:27:34.492209+0000 mgr.y (mgr.24335) 231 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:36.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:35 vm00 bash[17468]: cluster 2026-03-09T18:27:34.492209+0000 mgr.y (mgr.24335) 231 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:36.132 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:27:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:35] "GET /metrics HTTP/1.1" 200 207613 "" "Prometheus/2.33.4" 2026-03-09T18:27:36.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:35 vm08 bash[17774]: cluster 2026-03-09T18:27:34.492209+0000 mgr.y (mgr.24335) 231 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:37.838 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:37 vm08 bash[17774]: cluster 2026-03-09T18:27:36.492567+0000 mgr.y (mgr.24335) 232 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:37.838 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:27:37.839 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:37.839 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:27:37.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:37 vm00 bash[22468]: cluster 2026-03-09T18:27:36.492567+0000 mgr.y (mgr.24335) 232 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:37 vm00 bash[17468]: cluster 2026-03-09T18:27:36.492567+0000 mgr.y (mgr.24335) 232 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:37 vm08 systemd[1]: Stopping Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:37 vm08 bash[36471]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mgr.x 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 bash[36478]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mgr-x 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 bash[36513]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mgr.x 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x.service: Failed with result 'exit-code'. 2026-03-09T18:27:38.173 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: Stopped Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:27:38.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: Started Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:27:38.475 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.475 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.475 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:27:38.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.475 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.475 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:27:38.475 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:27:38 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:27:38.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:38 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:38.437Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=5 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": dial tcp 192.168.123.108:8443: connect: connection refused" 2026-03-09T18:27:38.974 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 bash[36576]: debug 2026-03-09T18:27:38.513+0000 7f7391a94140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:27:38.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 bash[36576]: debug 2026-03-09T18:27:38.553+0000 7f7391a94140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:27:38.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:38 vm08 bash[36576]: debug 2026-03-09T18:27:38.681+0000 7f7391a94140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:27:39.290 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.001+0000 7f7391a94140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:39 vm08 bash[17774]: audit 2026-03-09T18:27:38.289005+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:39 vm08 bash[17774]: audit 
2026-03-09T18:27:38.300267+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:39 vm08 bash[17774]: audit 2026-03-09T18:27:38.302352+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:39 vm08 bash[17774]: audit 2026-03-09T18:27:38.303480+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:39 vm08 bash[17774]: audit 2026-03-09T18:27:38.304232+0000 mon.c (mon.1) 123 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:39 vm08 bash[17774]: cluster 2026-03-09T18:27:38.492994+0000 mgr.y (mgr.24335) 233 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:39.609 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.509+0000 7f7391a94140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:39 vm00 bash[22468]: audit 2026-03-09T18:27:38.289005+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:39 vm00 bash[22468]: audit 2026-03-09T18:27:38.300267+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:39 vm00 bash[22468]: audit 2026-03-09T18:27:38.302352+0000 mon.c 
(mon.1) 121 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:39 vm00 bash[22468]: audit 2026-03-09T18:27:38.303480+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:39 vm00 bash[22468]: audit 2026-03-09T18:27:38.304232+0000 mon.c (mon.1) 123 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:39 vm00 bash[22468]: cluster 2026-03-09T18:27:38.492994+0000 mgr.y (mgr.24335) 233 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:39 vm00 bash[17468]: audit 2026-03-09T18:27:38.289005+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:39 vm00 bash[17468]: audit 2026-03-09T18:27:38.300267+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:39 vm00 bash[17468]: audit 2026-03-09T18:27:38.302352+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:39 vm00 bash[17468]: audit 2026-03-09T18:27:38.303480+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:39 vm00 bash[17468]: audit 2026-03-09T18:27:38.304232+0000 mon.c (mon.1) 123 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:27:39.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:39 vm00 bash[17468]: cluster 2026-03-09T18:27:38.492994+0000 mgr.y (mgr.24335) 233 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:39.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.605+0000 7f7391a94140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:27:39.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:27:39.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:27:39.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: from numpy import show_config as show_numpy_config 2026-03-09T18:27:39.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.737+0000 7f7391a94140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:27:40.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.885+0000 7f7391a94140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:27:40.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.929+0000 7f7391a94140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:27:40.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:39 vm08 bash[36576]: debug 2026-03-09T18:27:39.969+0000 7f7391a94140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:27:40.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.017+0000 7f7391a94140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:27:40.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.073+0000 7f7391a94140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:27:40.830 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.541+0000 7f7391a94140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:27:40.830 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.581+0000 7f7391a94140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:27:40.830 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.621+0000 7f7391a94140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-09T18:27:40.830 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.777+0000 7f7391a94140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:27:41.190 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.825+0000 7f7391a94140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:27:41.191 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.869+0000 7f7391a94140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:27:41.191 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:40 vm08 bash[36576]: debug 2026-03-09T18:27:40.993+0000 7f7391a94140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:27:41.191 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: debug 2026-03-09T18:27:41.185+0000 7f7391a94140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:27:41.440 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: debug 2026-03-09T18:27:41.393+0000 7f7391a94140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:27:41.708 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:41 vm08 bash[17774]: cluster 2026-03-09T18:27:40.493492+0000 mgr.y (mgr.24335) 234 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:41.708 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: debug 2026-03-09T18:27:41.437+0000 7f7391a94140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:27:41.708 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: debug 2026-03-09T18:27:41.485+0000 7f7391a94140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:27:41.708 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 
vm08 bash[36576]: debug 2026-03-09T18:27:41.661+0000 7f7391a94140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:27:41.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:41 vm00 bash[22468]: cluster 2026-03-09T18:27:40.493492+0000 mgr.y (mgr.24335) 234 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:41.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:41 vm00 bash[17468]: cluster 2026-03-09T18:27:40.493492+0000 mgr.y (mgr.24335) 234 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:41.974 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: debug 2026-03-09T18:27:41.909+0000 7f7391a94140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:27:41.974 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: [09/Mar/2026:18:27:41] ENGINE Bus STARTING 2026-03-09T18:27:41.974 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: CherryPy Checker: 2026-03-09T18:27:41.974 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:41 vm08 bash[36576]: The Application mounted at '' has an empty config. 
2026-03-09T18:27:42.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:42 vm08 bash[36576]: [09/Mar/2026:18:27:42] ENGINE Serving on http://:::9283 2026-03-09T18:27:42.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:42 vm08 bash[36576]: [09/Mar/2026:18:27:42] ENGINE Bus STARTED 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.477385+0000 mgr.y (mgr.24335) 235 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.725964+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.733475+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: cluster 2026-03-09T18:27:41.915630+0000 mon.a (mon.0) 745 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: cluster 2026-03-09T18:27:41.915727+0000 mon.a (mon.0) 746 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.917366+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.917885+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.? 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.919371+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:42 vm00 bash[22468]: audit 2026-03-09T18:27:41.919737+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.477385+0000 mgr.y (mgr.24335) 235 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.725964+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.733475+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: cluster 2026-03-09T18:27:41.915630+0000 mon.a (mon.0) 745 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: cluster 2026-03-09T18:27:41.915727+0000 mon.a (mon.0) 746 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.917366+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.? 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.917885+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.919371+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[17468]: audit 2026-03-09T18:27:41.919737+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:27:43.131 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:42 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:42.870Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=7 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.477385+0000 mgr.y (mgr.24335) 235 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.725964+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:43.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.733475+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: cluster 2026-03-09T18:27:41.915630+0000 mon.a (mon.0) 745 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: cluster 2026-03-09T18:27:41.915727+0000 mon.a (mon.0) 746 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.917366+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.917885+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.919371+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.? 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:27:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:42 vm08 bash[17774]: audit 2026-03-09T18:27:41.919737+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.? 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:27:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:43 vm00 bash[22468]: cluster 2026-03-09T18:27:42.493873+0000 mgr.y (mgr.24335) 236 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:43.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:43 vm00 bash[22468]: cluster 2026-03-09T18:27:42.757213+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e21: y(active, since 5m), standbys: x 2026-03-09T18:27:43.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:43 vm00 bash[17468]: cluster 2026-03-09T18:27:42.493873+0000 mgr.y (mgr.24335) 236 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:43.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:43 vm00 bash[17468]: cluster 2026-03-09T18:27:42.757213+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e21: y(active, since 5m), standbys: x 2026-03-09T18:27:43.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:43 vm00 bash[42815]: level=error ts=2026-03-09T18:27:43.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:43.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:43.522Z caller=notify.go:724 component=dispatcher 
receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:43.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:43.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:44.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:43 vm08 bash[17774]: cluster 2026-03-09T18:27:42.493873+0000 mgr.y (mgr.24335) 236 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:44.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:43 vm08 bash[17774]: cluster 2026-03-09T18:27:42.757213+0000 mon.a (mon.0) 747 : cluster [DBG] mgrmap e21: y(active, since 5m), standbys: x 2026-03-09T18:27:46.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:45 vm00 bash[22468]: cluster 2026-03-09T18:27:44.494414+0000 mgr.y (mgr.24335) 237 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:46.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:45 vm00 bash[17468]: cluster 2026-03-09T18:27:44.494414+0000 mgr.y (mgr.24335) 237 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:46.131 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:27:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:45] "GET /metrics HTTP/1.1" 200 207619 "" "Prometheus/2.33.4" 
2026-03-09T18:27:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:45 vm08 bash[17774]: cluster 2026-03-09T18:27:44.494414+0000 mgr.y (mgr.24335) 237 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:47.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:47 vm00 bash[22468]: cluster 2026-03-09T18:27:46.494731+0000 mgr.y (mgr.24335) 238 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:47.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:47 vm00 bash[17468]: cluster 2026-03-09T18:27:46.494731+0000 mgr.y (mgr.24335) 238 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:47 vm08 bash[17774]: cluster 2026-03-09T18:27:46.494731+0000 mgr.y (mgr.24335) 238 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:48.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:48 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:27:49.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:49 vm00 bash[22468]: cluster 2026-03-09T18:27:48.495026+0000 mgr.y (mgr.24335) 239 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:49.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:49 vm00 bash[17468]: cluster 2026-03-09T18:27:48.495026+0000 mgr.y (mgr.24335) 239 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:49.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:49 vm08 
bash[17774]: cluster 2026-03-09T18:27:48.495026+0000 mgr.y (mgr.24335) 239 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:51.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:51 vm00 bash[22468]: cluster 2026-03-09T18:27:50.495566+0000 mgr.y (mgr.24335) 240 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:51.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:51 vm00 bash[17468]: cluster 2026-03-09T18:27:50.495566+0000 mgr.y (mgr.24335) 240 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:51.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:51 vm08 bash[17774]: cluster 2026-03-09T18:27:50.495566+0000 mgr.y (mgr.24335) 240 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:52.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:52 vm00 bash[17468]: audit 2026-03-09T18:27:51.487087+0000 mgr.y (mgr.24335) 241 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:52.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:52 vm00 bash[22468]: audit 2026-03-09T18:27:51.487087+0000 mgr.y (mgr.24335) 241 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:52.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:52 vm08 bash[17774]: audit 2026-03-09T18:27:51.487087+0000 mgr.y (mgr.24335) 241 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:27:53.881 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:53 vm00 bash[17468]: cluster 2026-03-09T18:27:52.495883+0000 mgr.y (mgr.24335) 242 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:53.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:53 vm00 bash[22468]: cluster 2026-03-09T18:27:52.495883+0000 mgr.y (mgr.24335) 242 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:53 vm00 bash[42815]: level=error ts=2026-03-09T18:27:53.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:53.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:27:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:27:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:27:53.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:27:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:53 vm08 bash[17774]: cluster 2026-03-09T18:27:52.495883+0000 mgr.y (mgr.24335) 242 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:55.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:55 vm00 bash[17468]: cluster 2026-03-09T18:27:54.496517+0000 mgr.y (mgr.24335) 243 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:55.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:55 vm00 bash[22468]: cluster 2026-03-09T18:27:54.496517+0000 mgr.y (mgr.24335) 243 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:55.881 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:27:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:55] "GET /metrics HTTP/1.1" 200 207619 "" "Prometheus/2.33.4" 2026-03-09T18:27:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:55 vm08 bash[17774]: cluster 2026-03-09T18:27:54.496517+0000 mgr.y (mgr.24335) 243 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:27:57.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:57 vm00 bash[17468]: cluster 2026-03-09T18:27:56.496851+0000 mgr.y (mgr.24335) 244 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:57.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:57 vm00 bash[22468]: cluster 2026-03-09T18:27:56.496851+0000 mgr.y (mgr.24335) 244 : cluster [DBG] pgmap v199: 161 pgs: 161 
active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:57 vm08 bash[17774]: cluster 2026-03-09T18:27:56.496851+0000 mgr.y (mgr.24335) 244 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:58.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:27:57 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:27:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:27:59.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:27:59 vm00 bash[22468]: cluster 2026-03-09T18:27:58.497205+0000 mgr.y (mgr.24335) 245 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:59.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:27:59 vm00 bash[17468]: cluster 2026-03-09T18:27:58.497205+0000 mgr.y (mgr.24335) 245 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:27:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:27:59 vm08 bash[17774]: cluster 2026-03-09T18:27:58.497205+0000 mgr.y (mgr.24335) 245 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:01.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:01 vm00 bash[17468]: cluster 2026-03-09T18:28:00.497782+0000 mgr.y (mgr.24335) 246 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:01 vm00 bash[22468]: cluster 2026-03-09T18:28:00.497782+0000 mgr.y (mgr.24335) 246 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:28:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:01 vm08 bash[17774]: cluster 2026-03-09T18:28:00.497782+0000 mgr.y (mgr.24335) 246 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:02.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:02 vm00 bash[17468]: audit 2026-03-09T18:28:01.496266+0000 mgr.y (mgr.24335) 247 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:02.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:02 vm00 bash[22468]: audit 2026-03-09T18:28:01.496266+0000 mgr.y (mgr.24335) 247 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:02 vm08 bash[17774]: audit 2026-03-09T18:28:01.496266+0000 mgr.y (mgr.24335) 247 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:03.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:03 vm00 bash[17468]: cluster 2026-03-09T18:28:02.498166+0000 mgr.y (mgr.24335) 248 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:03.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:03 vm00 bash[42815]: level=error ts=2026-03-09T18:28:03.522Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post 
\"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:03.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:03.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:28:03.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:03.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:03.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:03 vm00 bash[22468]: cluster 2026-03-09T18:28:02.498166+0000 mgr.y (mgr.24335) 248 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:03.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:03 vm08 bash[17774]: cluster 2026-03-09T18:28:02.498166+0000 mgr.y (mgr.24335) 248 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:05.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:05 vm00 bash[17468]: cluster 2026-03-09T18:28:04.498705+0000 mgr.y (mgr.24335) 249 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:05.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:05 
vm00 bash[22468]: cluster 2026-03-09T18:28:04.498705+0000 mgr.y (mgr.24335) 249 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:05.881 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:28:05 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:05] "GET /metrics HTTP/1.1" 200 207618 "" "Prometheus/2.33.4" 2026-03-09T18:28:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:05 vm08 bash[17774]: cluster 2026-03-09T18:28:04.498705+0000 mgr.y (mgr.24335) 249 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:07.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:07 vm00 bash[22468]: cluster 2026-03-09T18:28:06.499085+0000 mgr.y (mgr.24335) 250 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:07.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:07 vm00 bash[17468]: cluster 2026-03-09T18:28:06.499085+0000 mgr.y (mgr.24335) 250 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:07 vm08 bash[17774]: cluster 2026-03-09T18:28:06.499085+0000 mgr.y (mgr.24335) 250 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:08.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:28:07 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:07] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:28:09.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:09 vm00 bash[17468]: cluster 2026-03-09T18:28:08.499369+0000 mgr.y (mgr.24335) 251 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:09.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:09 vm00 bash[22468]: cluster 2026-03-09T18:28:08.499369+0000 mgr.y (mgr.24335) 251 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:09 vm08 bash[17774]: cluster 2026-03-09T18:28:08.499369+0000 mgr.y (mgr.24335) 251 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:11.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:11 vm00 bash[17468]: cluster 2026-03-09T18:28:10.500341+0000 mgr.y (mgr.24335) 252 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:11.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:11 vm00 bash[22468]: cluster 2026-03-09T18:28:10.500341+0000 mgr.y (mgr.24335) 252 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:11.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:11 vm08 bash[17774]: cluster 2026-03-09T18:28:10.500341+0000 mgr.y (mgr.24335) 252 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:12.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:12 vm00 bash[17468]: audit 2026-03-09T18:28:11.506288+0000 mgr.y (mgr.24335) 253 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:12 vm00 bash[22468]: audit 2026-03-09T18:28:11.506288+0000 mgr.y (mgr.24335) 253 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:12 vm08 bash[17774]: audit 2026-03-09T18:28:11.506288+0000 mgr.y (mgr.24335) 253 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:13 vm00 bash[22468]: cluster 2026-03-09T18:28:12.500658+0000 mgr.y (mgr.24335) 254 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:13.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:13 vm00 bash[17468]: cluster 2026-03-09T18:28:12.500658+0000 mgr.y (mgr.24335) 254 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:13.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:13 vm00 bash[42815]: level=error ts=2026-03-09T18:28:13.523Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:13.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:13.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate 
certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:28:13.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:13.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:13 vm08 bash[17774]: cluster 2026-03-09T18:28:12.500658+0000 mgr.y (mgr.24335) 254 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:15.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:15 vm00 bash[17468]: cluster 2026-03-09T18:28:14.501126+0000 mgr.y (mgr.24335) 255 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:15.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:15 vm00 bash[22468]: cluster 2026-03-09T18:28:14.501126+0000 mgr.y (mgr.24335) 255 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:15.881 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:28:15 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:15] "GET /metrics HTTP/1.1" 200 207621 "" "Prometheus/2.33.4" 2026-03-09T18:28:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:15 vm08 bash[17774]: cluster 2026-03-09T18:28:14.501126+0000 mgr.y (mgr.24335) 255 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:17.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:17 vm00 bash[17468]: cluster 
2026-03-09T18:28:16.501452+0000 mgr.y (mgr.24335) 256 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:17.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:17 vm00 bash[22468]: cluster 2026-03-09T18:28:16.501452+0000 mgr.y (mgr.24335) 256 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:17 vm08 bash[17774]: cluster 2026-03-09T18:28:16.501452+0000 mgr.y (mgr.24335) 256 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:18.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:28:17 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:28:19.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:19 vm00 bash[17468]: cluster 2026-03-09T18:28:18.501725+0000 mgr.y (mgr.24335) 257 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:19.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:19 vm00 bash[22468]: cluster 2026-03-09T18:28:18.501725+0000 mgr.y (mgr.24335) 257 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:19 vm08 bash[17774]: cluster 2026-03-09T18:28:18.501725+0000 mgr.y (mgr.24335) 257 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:21.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:21 vm00 bash[17468]: cluster 2026-03-09T18:28:20.502180+0000 mgr.y (mgr.24335) 258 : cluster [DBG] pgmap v211: 161 
pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:21.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:21 vm00 bash[22468]: cluster 2026-03-09T18:28:20.502180+0000 mgr.y (mgr.24335) 258 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:21 vm08 bash[17774]: cluster 2026-03-09T18:28:20.502180+0000 mgr.y (mgr.24335) 258 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:22.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:22 vm00 bash[17468]: audit 2026-03-09T18:28:21.516631+0000 mgr.y (mgr.24335) 259 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:22.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:22 vm00 bash[22468]: audit 2026-03-09T18:28:21.516631+0000 mgr.y (mgr.24335) 259 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:22 vm08 bash[17774]: audit 2026-03-09T18:28:21.516631+0000 mgr.y (mgr.24335) 259 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:23.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:23 vm00 bash[22468]: cluster 2026-03-09T18:28:22.502530+0000 mgr.y (mgr.24335) 260 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:23.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:23 vm00 bash[17468]: cluster 2026-03-09T18:28:22.502530+0000 mgr.y (mgr.24335) 260 : 
cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:23.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:23 vm00 bash[42815]: level=error ts=2026-03-09T18:28:23.524Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:23.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:23.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:28:23.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:23.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:23 vm08 bash[17774]: cluster 2026-03-09T18:28:22.502530+0000 mgr.y (mgr.24335) 260 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T18:28:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:25 vm00 bash[17468]: cluster 2026-03-09T18:28:24.502922+0000 mgr.y (mgr.24335) 261 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:25.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:25 vm00 bash[22468]: cluster 2026-03-09T18:28:24.502922+0000 mgr.y (mgr.24335) 261 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:25.881 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:28:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:25] "GET /metrics HTTP/1.1" 200 207621 "" "Prometheus/2.33.4" 2026-03-09T18:28:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:25 vm08 bash[17774]: cluster 2026-03-09T18:28:24.502922+0000 mgr.y (mgr.24335) 261 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:27.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:27 vm00 bash[17468]: cluster 2026-03-09T18:28:26.503242+0000 mgr.y (mgr.24335) 262 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:27.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:27 vm00 bash[22468]: cluster 2026-03-09T18:28:26.503242+0000 mgr.y (mgr.24335) 262 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:27 vm08 bash[17774]: cluster 2026-03-09T18:28:26.503242+0000 mgr.y (mgr.24335) 262 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:28.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 
18:28:27 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:27] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:28:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:28 vm00 bash[17468]: audit 2026-03-09T18:28:28.531695+0000 mon.c (mon.1) 128 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:28 vm00 bash[17468]: audit 2026-03-09T18:28:28.532321+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:28 vm00 bash[17468]: audit 2026-03-09T18:28:28.543854+0000 mon.c (mon.1) 129 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:28 vm00 bash[17468]: audit 2026-03-09T18:28:28.544159+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:28 vm00 bash[22468]: audit 2026-03-09T18:28:28.531695+0000 mon.c (mon.1) 128 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:28 vm00 bash[22468]: audit 2026-03-09T18:28:28.532321+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:28 vm00 bash[22468]: audit 2026-03-09T18:28:28.543854+0000 mon.c (mon.1) 129 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:28:28.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:28 vm00 bash[22468]: audit 2026-03-09T18:28:28.544159+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:28:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:28 vm08 bash[17774]: audit 2026-03-09T18:28:28.531695+0000 mon.c (mon.1) 128 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:28:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:28 vm08 bash[17774]: audit 2026-03-09T18:28:28.532321+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:28:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:28 vm08 bash[17774]: audit 2026-03-09T18:28:28.543854+0000 mon.c (mon.1) 129 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:28:28.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:28 vm08 bash[17774]: audit 2026-03-09T18:28:28.544159+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24335 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:28:29.880 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:29 vm00 bash[17468]: cluster 2026-03-09T18:28:28.503504+0000 mgr.y (mgr.24335) 263 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:29.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:29 vm00 bash[22468]: cluster 2026-03-09T18:28:28.503504+0000 mgr.y (mgr.24335) 263 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:29 vm08 bash[17774]: cluster 2026-03-09T18:28:28.503504+0000 mgr.y (mgr.24335) 263 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:31.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:31 vm00 bash[17468]: cluster 2026-03-09T18:28:30.504183+0000 mgr.y (mgr.24335) 264 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:31.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:31 vm00 bash[22468]: cluster 2026-03-09T18:28:30.504183+0000 mgr.y (mgr.24335) 264 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:31 vm08 bash[17774]: cluster 2026-03-09T18:28:30.504183+0000 mgr.y (mgr.24335) 264 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:32.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:32 vm00 bash[17468]: audit 2026-03-09T18:28:31.526780+0000 mgr.y (mgr.24335) 265 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T18:28:32.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:32 vm00 bash[22468]: audit 2026-03-09T18:28:31.526780+0000 mgr.y (mgr.24335) 265 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:32 vm08 bash[17774]: audit 2026-03-09T18:28:31.526780+0000 mgr.y (mgr.24335) 265 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:33 vm08 bash[17774]: cluster 2026-03-09T18:28:32.504486+0000 mgr.y (mgr.24335) 266 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:33 vm00 bash[17468]: cluster 2026-03-09T18:28:32.504486+0000 mgr.y (mgr.24335) 266 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:33 vm00 bash[22468]: cluster 2026-03-09T18:28:32.504486+0000 mgr.y (mgr.24335) 266 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:33.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:33 vm00 bash[42815]: level=error ts=2026-03-09T18:28:33.525Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post 
\"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:33.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:33.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:33.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:33.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:28:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:35 vm00 bash[17468]: cluster 2026-03-09T18:28:34.505066+0000 mgr.y (mgr.24335) 267 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:35 vm00 bash[22468]: cluster 2026-03-09T18:28:34.505066+0000 mgr.y (mgr.24335) 267 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:35.881 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:28:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:35] "GET /metrics HTTP/1.1" 200 207635 "" "Prometheus/2.33.4" 2026-03-09T18:28:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:35 vm08 bash[17774]: cluster 2026-03-09T18:28:34.505066+0000 mgr.y (mgr.24335) 267 : 
cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:37.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:37 vm00 bash[17468]: cluster 2026-03-09T18:28:36.505359+0000 mgr.y (mgr.24335) 268 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:37.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:37 vm00 bash[22468]: cluster 2026-03-09T18:28:36.505359+0000 mgr.y (mgr.24335) 268 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:37.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:37 vm08 bash[17774]: cluster 2026-03-09T18:28:36.505359+0000 mgr.y (mgr.24335) 268 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:38.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:28:37 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:37] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:28:39.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:39 vm00 bash[17468]: cluster 2026-03-09T18:28:38.505692+0000 mgr.y (mgr.24335) 269 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:39.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:39 vm00 bash[22468]: cluster 2026-03-09T18:28:38.505692+0000 mgr.y (mgr.24335) 269 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:39 vm08 bash[17774]: cluster 2026-03-09T18:28:38.505692+0000 mgr.y (mgr.24335) 269 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:41.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:41 vm00 bash[17468]: cluster 2026-03-09T18:28:40.506160+0000 mgr.y (mgr.24335) 270 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:41.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:41 vm00 bash[22468]: cluster 2026-03-09T18:28:40.506160+0000 mgr.y (mgr.24335) 270 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:41.882 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:41 vm08 bash[17774]: cluster 2026-03-09T18:28:40.506160+0000 mgr.y (mgr.24335) 270 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:42 vm00 bash[17468]: audit 2026-03-09T18:28:41.529946+0000 mgr.y (mgr.24335) 271 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:42 vm00 bash[17468]: audit 2026-03-09T18:28:41.737654+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:42 vm00 bash[17468]: audit 2026-03-09T18:28:41.738769+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:42 vm00 bash[17468]: audit 2026-03-09T18:28:41.739327+0000 mon.c (mon.1) 132 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:42 vm00 bash[17468]: audit 2026-03-09T18:28:41.898805+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:42 vm00 bash[22468]: audit 2026-03-09T18:28:41.529946+0000 mgr.y (mgr.24335) 271 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:42 vm00 bash[22468]: audit 2026-03-09T18:28:41.737654+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:42 vm00 bash[22468]: audit 2026-03-09T18:28:41.738769+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:42 vm00 bash[22468]: audit 2026-03-09T18:28:41.739327+0000 mon.c (mon.1) 132 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:42.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:42 vm00 bash[22468]: audit 2026-03-09T18:28:41.898805+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:28:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:42 vm08 bash[17774]: audit 2026-03-09T18:28:41.529946+0000 mgr.y (mgr.24335) 271 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:42 
vm08 bash[17774]: audit 2026-03-09T18:28:41.737654+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:28:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:42 vm08 bash[17774]: audit 2026-03-09T18:28:41.738769+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:28:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:42 vm08 bash[17774]: audit 2026-03-09T18:28:41.739327+0000 mon.c (mon.1) 132 : audit [INF] from='mgr.24335 192.168.123.100:0/2123385786' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:28:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:42 vm08 bash[17774]: audit 2026-03-09T18:28:41.898805+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24335 ' entity='mgr.y' 2026-03-09T18:28:43.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:43 vm00 bash[42815]: level=error ts=2026-03-09T18:28:43.525Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:43.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:43.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:28:43.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:43 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:43.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:44.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:43 vm08 bash[17774]: cluster 2026-03-09T18:28:42.506490+0000 mgr.y (mgr.24335) 272 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:43 vm00 bash[17468]: cluster 2026-03-09T18:28:42.506490+0000 mgr.y (mgr.24335) 272 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:44.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:43 vm00 bash[22468]: cluster 2026-03-09T18:28:42.506490+0000 mgr.y (mgr.24335) 272 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:46.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:45 vm00 bash[22468]: cluster 2026-03-09T18:28:44.506909+0000 mgr.y (mgr.24335) 273 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:46.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:45 vm00 bash[17468]: cluster 2026-03-09T18:28:44.506909+0000 mgr.y (mgr.24335) 273 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:46.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:28:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:45] "GET /metrics HTTP/1.1" 200 207625 "" "Prometheus/2.33.4" 2026-03-09T18:28:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:45 vm08 bash[17774]: cluster 2026-03-09T18:28:44.506909+0000 mgr.y (mgr.24335) 273 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:47.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:47 vm00 bash[22468]: cluster 2026-03-09T18:28:46.507172+0000 mgr.y (mgr.24335) 274 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:47.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:47 vm00 bash[17468]: cluster 2026-03-09T18:28:46.507172+0000 mgr.y (mgr.24335) 274 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:47 vm08 bash[17774]: cluster 2026-03-09T18:28:46.507172+0000 mgr.y (mgr.24335) 274 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:48.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:28:47 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:28:49.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:49 vm00 bash[22468]: cluster 2026-03-09T18:28:48.507492+0000 mgr.y (mgr.24335) 275 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:49.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:49 vm00 bash[17468]: cluster 
2026-03-09T18:28:48.507492+0000 mgr.y (mgr.24335) 275 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:49.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:49 vm08 bash[17774]: cluster 2026-03-09T18:28:48.507492+0000 mgr.y (mgr.24335) 275 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:51.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:51 vm00 bash[17468]: cluster 2026-03-09T18:28:50.508162+0000 mgr.y (mgr.24335) 276 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:51.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:51 vm00 bash[22468]: cluster 2026-03-09T18:28:50.508162+0000 mgr.y (mgr.24335) 276 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:51.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:51 vm08 bash[17774]: cluster 2026-03-09T18:28:50.508162+0000 mgr.y (mgr.24335) 276 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:52.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:52 vm00 bash[17468]: audit 2026-03-09T18:28:51.537569+0000 mgr.y (mgr.24335) 277 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:52.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:52 vm00 bash[22468]: audit 2026-03-09T18:28:51.537569+0000 mgr.y (mgr.24335) 277 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:52.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:52 vm08 
bash[17774]: audit 2026-03-09T18:28:51.537569+0000 mgr.y (mgr.24335) 277 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:28:53.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:53 vm00 bash[17468]: cluster 2026-03-09T18:28:52.508468+0000 mgr.y (mgr.24335) 278 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:53.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:53 vm00 bash[22468]: cluster 2026-03-09T18:28:52.508468+0000 mgr.y (mgr.24335) 278 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:53 vm00 bash[42815]: level=error ts=2026-03-09T18:28:53.526Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:28:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:53.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:28:53.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 
09 18:28:53 vm00 bash[42815]: level=warn ts=2026-03-09T18:28:53.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:28:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:53 vm08 bash[17774]: cluster 2026-03-09T18:28:52.508468+0000 mgr.y (mgr.24335) 278 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:55.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:55 vm00 bash[22468]: cluster 2026-03-09T18:28:54.509096+0000 mgr.y (mgr.24335) 279 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:55 vm00 bash[17468]: cluster 2026-03-09T18:28:54.509096+0000 mgr.y (mgr.24335) 279 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:55.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:28:55 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:55] "GET /metrics HTTP/1.1" 200 207625 "" "Prometheus/2.33.4" 2026-03-09T18:28:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:55 vm08 bash[17774]: cluster 2026-03-09T18:28:54.509096+0000 mgr.y (mgr.24335) 279 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:28:57.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:57 vm00 bash[22468]: cluster 2026-03-09T18:28:56.511824+0000 mgr.y (mgr.24335) 280 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-09T18:28:57.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:57 vm00 bash[17468]: cluster 2026-03-09T18:28:56.511824+0000 mgr.y (mgr.24335) 280 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:57 vm08 bash[17774]: cluster 2026-03-09T18:28:56.511824+0000 mgr.y (mgr.24335) 280 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:28:58.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:28:57 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:28:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:28:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:28:59 vm00 bash[17468]: cluster 2026-03-09T18:28:58.512037+0000 mgr.y (mgr.24335) 281 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:28:59.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:28:59 vm00 bash[22468]: cluster 2026-03-09T18:28:58.512037+0000 mgr.y (mgr.24335) 281 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:28:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:28:59 vm08 bash[17774]: cluster 2026-03-09T18:28:58.512037+0000 mgr.y (mgr.24335) 281 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:01 vm00 bash[22468]: cluster 2026-03-09T18:29:00.512602+0000 mgr.y (mgr.24335) 282 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:01.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:29:01 vm00 bash[17468]: cluster 2026-03-09T18:29:00.512602+0000 mgr.y (mgr.24335) 282 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:01 vm08 bash[17774]: cluster 2026-03-09T18:29:00.512602+0000 mgr.y (mgr.24335) 282 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:02.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:02 vm00 bash[22468]: audit 2026-03-09T18:29:01.544790+0000 mgr.y (mgr.24335) 283 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:02.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:02 vm00 bash[17468]: audit 2026-03-09T18:29:01.544790+0000 mgr.y (mgr.24335) 283 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:02 vm08 bash[17774]: audit 2026-03-09T18:29:01.544790+0000 mgr.y (mgr.24335) 283 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:03.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:03 vm00 bash[22468]: cluster 2026-03-09T18:29:02.512872+0000 mgr.y (mgr.24335) 284 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:03 vm00 bash[17468]: cluster 2026-03-09T18:29:02.512872+0000 mgr.y (mgr.24335) 284 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:03.880 
INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:03 vm00 bash[42815]: level=error ts=2026-03-09T18:29:03.527Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:29:03.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:03.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:29:03.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:03 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:03.531Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:29:03.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:03 vm08 bash[17774]: cluster 2026-03-09T18:29:02.512872+0000 mgr.y (mgr.24335) 284 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:05.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:05 vm00 bash[22468]: cluster 2026-03-09T18:29:04.513354+0000 mgr.y 
(mgr.24335) 285 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:05.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:05 vm00 bash[17468]: cluster 2026-03-09T18:29:04.513354+0000 mgr.y (mgr.24335) 285 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:05.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:05 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:05] "GET /metrics HTTP/1.1" 200 207628 "" "Prometheus/2.33.4" 2026-03-09T18:29:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:05 vm08 bash[17774]: cluster 2026-03-09T18:29:04.513354+0000 mgr.y (mgr.24335) 285 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:07 vm00 bash[22468]: cluster 2026-03-09T18:29:06.513749+0000 mgr.y (mgr.24335) 286 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:07.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:07 vm00 bash[17468]: cluster 2026-03-09T18:29:06.513749+0000 mgr.y (mgr.24335) 286 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:07 vm08 bash[17774]: cluster 2026-03-09T18:29:06.513749+0000 mgr.y (mgr.24335) 286 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T18:29:08.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:07 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:07] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 
2026-03-09T18:29:09.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:09 vm00 bash[17468]: cluster 2026-03-09T18:29:08.514054+0000 mgr.y (mgr.24335) 287 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:09.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:09 vm00 bash[22468]: cluster 2026-03-09T18:29:08.514054+0000 mgr.y (mgr.24335) 287 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:09 vm08 bash[17774]: cluster 2026-03-09T18:29:08.514054+0000 mgr.y (mgr.24335) 287 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:11.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:11 vm00 bash[17468]: cluster 2026-03-09T18:29:10.514714+0000 mgr.y (mgr.24335) 288 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:11.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:11 vm00 bash[22468]: cluster 2026-03-09T18:29:10.514714+0000 mgr.y (mgr.24335) 288 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:11.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:11 vm08 bash[17774]: cluster 2026-03-09T18:29:10.514714+0000 mgr.y (mgr.24335) 288 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:12.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:12 vm00 bash[17468]: audit 2026-03-09T18:29:11.554013+0000 mgr.y (mgr.24335) 289 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:29:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:12 vm00 bash[22468]: audit 2026-03-09T18:29:11.554013+0000 mgr.y (mgr.24335) 289 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:29:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:12 vm08 bash[17774]: audit 2026-03-09T18:29:11.554013+0000 mgr.y (mgr.24335) 289 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:29:13.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:13 vm00 bash[17468]: cluster 2026-03-09T18:29:12.515094+0000 mgr.y (mgr.24335) 290 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:13 vm00 bash[22468]: cluster 2026-03-09T18:29:12.515094+0000 mgr.y (mgr.24335) 290 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:13.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:13 vm00 bash[42815]: level=error ts=2026-03-09T18:29:13.527Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:29:13.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:13.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:29:13.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:13 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:13.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:29:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:13 vm08 bash[17774]: cluster 2026-03-09T18:29:12.515094+0000 mgr.y (mgr.24335) 290 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:15.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:15 vm00 bash[22468]: cluster 2026-03-09T18:29:14.515648+0000 mgr.y (mgr.24335) 291 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:15.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:15 vm00 bash[17468]: cluster 2026-03-09T18:29:14.515648+0000 mgr.y (mgr.24335) 291 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:15.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:15 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:15] "GET /metrics HTTP/1.1" 200 207629 "" "Prometheus/2.33.4"
2026-03-09T18:29:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:15 vm08 bash[17774]: cluster 2026-03-09T18:29:14.515648+0000 mgr.y (mgr.24335) 291 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:17.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:17 vm00 bash[17468]: cluster 2026-03-09T18:29:16.516474+0000 mgr.y (mgr.24335) 292 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:17.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:17 vm00 bash[22468]: cluster 2026-03-09T18:29:16.516474+0000 mgr.y (mgr.24335) 292 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:17 vm08 bash[17774]: cluster 2026-03-09T18:29:16.516474+0000 mgr.y (mgr.24335) 292 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:18.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:17 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:17] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-09T18:29:19.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:19 vm00 bash[17468]: cluster 2026-03-09T18:29:18.516780+0000 mgr.y (mgr.24335) 293 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:19.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:19 vm00 bash[22468]: cluster 2026-03-09T18:29:18.516780+0000 mgr.y (mgr.24335) 293 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:19 vm08 bash[17774]: cluster 2026-03-09T18:29:18.516780+0000 mgr.y (mgr.24335) 293 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:21.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:21 vm00 bash[22468]: cluster 2026-03-09T18:29:20.517389+0000 mgr.y (mgr.24335) 294 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:21.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:21 vm00 bash[17468]: cluster 2026-03-09T18:29:20.517389+0000 mgr.y (mgr.24335) 294 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:21 vm08 bash[17774]: cluster 2026-03-09T18:29:20.517389+0000 mgr.y (mgr.24335) 294 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:22.612 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:29:22.839 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:22 vm00 bash[22468]: audit 2026-03-09T18:29:21.564423+0000 mgr.y (mgr.24335) 295 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:29:22.839 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:22 vm00 bash[17468]: audit 2026-03-09T18:29:21.564423+0000 mgr.y (mgr.24335) 295 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:29:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:22 vm08 bash[17774]: audit 2026-03-09T18:29:21.564423+0000 mgr.y (mgr.24335) 295 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:29:23.029 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (6m) 2m ago 6m 16.2M - ba2b418f427c 941abbc9e671
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (6m) 101s ago 6m 41.1M - 8.3.5 dad864ee21e9 771af00209da
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (6m) 2m ago 6m 42.2M - 3.5 e1d6a67b021e d1efcd22ebcc
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283 running (104s) 101s ago 9m 276M - 19.2.3-678-ge911bdeb 654f31e6858e c24396cb6839
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:9283 running (10m) 2m ago 10m 447M - 17.2.0 e1d6a67b021e 67bec09a4a4c
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (10m) 2m ago 10m 47.2M 2048M 17.2.0 e1d6a67b021e 819e8890799a
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (9m) 101s ago 9m 38.4M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (9m) 2m ago 9m 35.6M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (6m) 2m ago 6m 7743k - 1dbe0e931976 980c035e4ada
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (6m) 101s ago 6m 8599k - 1dbe0e931976 bba8a2ca502c
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (9m) 2m ago 9m 47.8M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (8m) 2m ago 8m 48.7M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (8m) 2m ago 8m 44.2M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (8m) 2m ago 8m 46.4M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (8m) 101s ago 8m 47.3M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (7m) 101s ago 7m 46.8M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (7m) 101s ago 7m 46.6M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (7m) 101s ago 7m 46.1M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (6m) 101s ago 6m 50.8M - 514e6a882f6e 4ab95bb45c38
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (6m) 2m ago 6m 82.3M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:29:23.030 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (6m) 101s ago 6m 82.4M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:29:23.087 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1,
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:29:23.547 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {},
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 14,
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:29:23.548 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:29:23.606 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-09T18:29:23.779 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:23 vm00 bash[22468]: cluster 2026-03-09T18:29:22.517760+0000 mgr.y (mgr.24335) 296 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:23.779 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:23 vm00 bash[22468]: audit 2026-03-09T18:29:23.027329+0000 mgr.y (mgr.24335) 297 : audit [DBG] from='client.24754 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:29:23.779 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:23 vm00 bash[22468]: audit 2026-03-09T18:29:23.549787+0000 mon.a (mon.0) 751 : audit [DBG] from='client.? 192.168.123.100:0/1742027438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:29:23.779 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:23 vm00 bash[17468]: cluster 2026-03-09T18:29:22.517760+0000 mgr.y (mgr.24335) 296 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:23.779 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:23 vm00 bash[17468]: audit 2026-03-09T18:29:23.027329+0000 mgr.y (mgr.24335) 297 : audit [DBG] from='client.24754 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:29:23.779 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:23 vm00 bash[17468]: audit 2026-03-09T18:29:23.549787+0000 mon.a (mon.0) 751 : audit [DBG] from='client.? 192.168.123.100:0/1742027438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:29:23.779 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:23 vm00 bash[42815]: level=error ts=2026-03-09T18:29:23.528Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:29:23.779 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:23.531Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs"
2026-03-09T18:29:23.779 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:23 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:23.531Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs"
2026-03-09T18:29:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:23 vm08 bash[17774]: cluster 2026-03-09T18:29:22.517760+0000 mgr.y (mgr.24335) 296 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:29:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:23 vm08 bash[17774]: audit 2026-03-09T18:29:23.027329+0000 mgr.y (mgr.24335) 297 : audit [DBG] from='client.24754 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:29:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:23 vm08 bash[17774]: audit 2026-03-09T18:29:23.549787+0000 mon.a (mon.0) 751 : audit [DBG] from='client.? 192.168.123.100:0/1742027438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: cluster:
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: id: 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: health: HEALTH_OK
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: services:
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: mon: 3 daemons, quorum a,c,b (age 9m)
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: mgr: y(active, since 6m), standbys: x
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: osd: 8 osds: 8 up (since 7m), 8 in (since 7m)
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: data:
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: pools: 6 pools, 161 pgs
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: objects: 209 objects, 457 KiB
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: usage: 72 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: pgs: 161 active+clean
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:29:24.080 INFO:teuthology.orchestra.run.vm00.stdout: io:
2026-03-09T18:29:24.081 INFO:teuthology.orchestra.run.vm00.stdout: client: 853 B/s rd, 0 op/s rd, 0 op/s wr
2026-03-09T18:29:24.081 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:29:24.131 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-09T18:29:24.579 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK
2026-03-09T18:29:24.645 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 2'"'"''
2026-03-09T18:29:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:24 vm00 bash[17468]: audit 2026-03-09T18:29:24.082901+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.100:0/1486448815' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:29:24.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:24 vm00 bash[17468]: audit 2026-03-09T18:29:24.581334+0000 mon.a (mon.0) 753 : audit [DBG] from='client.? 192.168.123.100:0/1631380976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:29:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:24 vm00 bash[22468]: audit 2026-03-09T18:29:24.082901+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.100:0/1486448815' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:29:24.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:24 vm00 bash[22468]: audit 2026-03-09T18:29:24.581334+0000 mon.a (mon.0) 753 : audit [DBG] from='client.? 192.168.123.100:0/1631380976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:29:24.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:24 vm08 bash[17774]: audit 2026-03-09T18:29:24.082901+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.100:0/1486448815' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:29:24.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:24 vm08 bash[17774]: audit 2026-03-09T18:29:24.581334+0000 mon.a (mon.0) 753 : audit [DBG] from='client.? 192.168.123.100:0/1631380976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:29:25.159 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:29:25.204 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph mgr fail'
2026-03-09T18:29:25.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:25 vm00 bash[22468]: cluster 2026-03-09T18:29:24.518297+0000 mgr.y (mgr.24335) 298 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:25.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:25 vm00 bash[22468]: audit 2026-03-09T18:29:25.150581+0000 mon.a (mon.0) 754 : audit [DBG] from='client.? 192.168.123.100:0/390213259' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:29:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:25 vm00 bash[17468]: cluster 2026-03-09T18:29:24.518297+0000 mgr.y (mgr.24335) 298 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:25.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:25 vm00 bash[17468]: audit 2026-03-09T18:29:25.150581+0000 mon.a (mon.0) 754 : audit [DBG] from='client.? 192.168.123.100:0/390213259' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:29:25.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:25 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:25] "GET /metrics HTTP/1.1" 200 207629 "" "Prometheus/2.33.4"
2026-03-09T18:29:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:25 vm08 bash[17774]: cluster 2026-03-09T18:29:24.518297+0000 mgr.y (mgr.24335) 298 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:29:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:25 vm08 bash[17774]: audit 2026-03-09T18:29:25.150581+0000 mon.a (mon.0) 754 : audit [DBG] from='client.? 192.168.123.100:0/390213259' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:29:26.741 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:26 vm00 bash[22468]: audit 2026-03-09T18:29:25.683701+0000 mon.c (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2200771363' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:26 vm00 bash[22468]: audit 2026-03-09T18:29:25.684362+0000 mon.a (mon.0) 755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:26 vm00 bash[22468]: cluster 2026-03-09T18:29:25.958311+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:26 vm00 bash[22468]: cluster 2026-03-09T18:29:26.464773+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon y started
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:26 vm00 bash[17468]: audit 2026-03-09T18:29:25.683701+0000 mon.c (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2200771363' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:26 vm00 bash[17468]: audit 2026-03-09T18:29:25.684362+0000 mon.a (mon.0) 755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:26 vm00 bash[17468]: cluster 2026-03-09T18:29:25.958311+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:26 vm00 bash[17468]: cluster 2026-03-09T18:29:26.464773+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon y started
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:26 vm00 bash[17744]: debug ignoring --setuser ceph since I am not root
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:26 vm00 bash[17744]: ignoring --setgroup ceph since I am not root
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:26 vm00 bash[17744]: debug 2026-03-09T18:29:26.709+0000 7f369a11d700 1 -- 192.168.123.100:0/3389935204 <== mon.0 v2:192.168.123.100:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x562e3535a340 con 0x562e360d6c00
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:26 vm00 bash[17744]: debug 2026-03-09T18:29:26.801+0000 7f36a2b79000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:29:26.897 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:26 vm00 bash[17744]: debug 2026-03-09T18:29:26.873+0000 7f36a2b79000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:29:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:26 vm08 bash[17774]: audit 2026-03-09T18:29:25.683701+0000 mon.c (mon.1) 133 : audit [INF] from='client.? 192.168.123.100:0/2200771363' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:29:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:26 vm08 bash[17774]: audit 2026-03-09T18:29:25.684362+0000 mon.a (mon.0) 755 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:29:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:26 vm08 bash[17774]: cluster 2026-03-09T18:29:25.958311+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-09T18:29:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:26 vm08 bash[17774]: cluster 2026-03-09T18:29:26.464773+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon y started
2026-03-09T18:29:26.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:26 vm08 bash[36576]: [09/Mar/2026:18:29:26] ENGINE Bus STOPPING
2026-03-09T18:29:27.047 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:26 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:26.948Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=5 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": dial tcp 192.168.123.108:8443: connect: connection refused"
2026-03-09T18:29:27.301 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:27 vm08 bash[36576]: [09/Mar/2026:18:29:27] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-09T18:29:27.301 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:27 vm08 bash[36576]: [09/Mar/2026:18:29:27] ENGINE Bus STOPPED
2026-03-09T18:29:27.301 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:27 vm08 bash[36576]: [09/Mar/2026:18:29:27] ENGINE Bus STARTING
2026-03-09T18:29:27.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:27 vm00 bash[17744]: debug 2026-03-09T18:29:27.253+0000 7f36a2b79000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:29:27.647 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:27 vm08 bash[36576]: [09/Mar/2026:18:29:27] ENGINE Serving on http://:::9283
2026-03-09T18:29:27.647 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:27 vm08 bash[36576]: [09/Mar/2026:18:29:27] ENGINE Bus STARTED
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.664416+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.664583+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.664705+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.664854+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.665033+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.665315+0000 mon.a (mon.0) 758 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.665384+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cluster 2026-03-09T18:29:26.665419+0000 mon.a (mon.0) 759 : cluster [DBG] mgrmap e22: x(active, starting, since 0.952901s), standbys: y
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.665574+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.665725+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.666049+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.666234+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.666398+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.666537+0000 mon.b (mon.2) 52 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.666682+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.667834+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.668097+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:26.668780+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cluster 2026-03-09T18:29:27.026407+0000 mon.a (mon.0) 760 : cluster [INF] Manager daemon x is now available
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.040528+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cephadm 2026-03-09T18:29:27.041041+0000 mgr.x (mgr.24751) 1 : cephadm [INF] Queued rgw.foo for migration
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cephadm 2026-03-09T18:29:27.041628+0000 mgr.x (mgr.24751) 2 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}}
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.048246+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cephadm 2026-03-09T18:29:27.049426+0000 mgr.x (mgr.24751) 3 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cephadm 2026-03-09T18:29:27.049494+0000 mgr.x (mgr.24751) 4 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: cephadm 2026-03-09T18:29:27.049685+0000 mgr.x (mgr.24751) 5 : cephadm [INF] Checking for cert/key for grafana.a
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.062287+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.088132+0000 mon.b (mon.2) 57 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.090546+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.093122+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.098033+0000 mon.b (mon.2) 59 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.131419+0000 mon.b (mon.2) 60 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.134013+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.566406+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:27 vm08 bash[17774]: audit 2026-03-09T18:29:27.575912+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.664416+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:29:27.975
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.664583+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.664705+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.664854+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.665033+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.665315+0000 mon.a (mon.0) 758 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.665384+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cluster 2026-03-09T18:29:26.665419+0000 mon.a (mon.0) 759 : cluster [DBG] mgrmap e22: x(active, starting, since 0.952901s), standbys: y 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.665574+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:29:27.975 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.665725+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.666049+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.666234+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.666398+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:29:27.976 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.666537+0000 mon.b (mon.2) 52 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.666682+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.667834+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.668097+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:26.668780+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cluster 2026-03-09T18:29:27.026407+0000 mon.a (mon.0) 760 : cluster [INF] Manager daemon x is now available 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.040528+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.664416+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon 
metadata", "id": "a"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.664583+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.664705+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.664854+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.665033+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.665315+0000 mon.a (mon.0) 758 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.665384+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cluster 2026-03-09T18:29:26.665419+0000 mon.a (mon.0) 759 : cluster [DBG] mgrmap e22: x(active, starting, since 0.952901s), standbys: y 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.665574+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.665725+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.666049+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.666234+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.666398+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:29:27.976 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.666537+0000 mon.b (mon.2) 52 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.666682+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.667834+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.668097+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:26.668780+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cluster 2026-03-09T18:29:27.026407+0000 mon.a (mon.0) 760 : cluster [INF] Manager daemon x is now available 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.040528+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cephadm 2026-03-09T18:29:27.041041+0000 mgr.x (mgr.24751) 1 : cephadm [INF] Queued rgw.foo for migration 2026-03-09T18:29:27.976 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cephadm 2026-03-09T18:29:27.041628+0000 mgr.x (mgr.24751) 2 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}} 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.048246+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cephadm 2026-03-09T18:29:27.049426+0000 mgr.x (mgr.24751) 3 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cephadm 2026-03-09T18:29:27.049494+0000 mgr.x (mgr.24751) 4 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: cephadm 2026-03-09T18:29:27.049685+0000 mgr.x (mgr.24751) 5 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.062287+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.088132+0000 mon.b (mon.2) 57 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.090546+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.093122+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.098033+0000 mon.b (mon.2) 59 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.131419+0000 mon.b (mon.2) 60 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.134013+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.566406+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:27 vm00 bash[22468]: audit 2026-03-09T18:29:27.575912+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cephadm 2026-03-09T18:29:27.041041+0000 mgr.x (mgr.24751) 1 : cephadm [INF] Queued rgw.foo for migration 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cephadm 
2026-03-09T18:29:27.041628+0000 mgr.x (mgr.24751) 2 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}} 2026-03-09T18:29:27.976 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.048246+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cephadm 2026-03-09T18:29:27.049426+0000 mgr.x (mgr.24751) 3 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cephadm 2026-03-09T18:29:27.049494+0000 mgr.x (mgr.24751) 4 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: cephadm 2026-03-09T18:29:27.049685+0000 mgr.x (mgr.24751) 5 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.062287+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.088132+0000 mon.b (mon.2) 57 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.090546+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:29:27.977 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.093122+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.098033+0000 mon.b (mon.2) 59 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.131419+0000 mon.b (mon.2) 60 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.134013+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.566406+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[17468]: audit 2026-03-09T18:29:27.575912+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:27.977 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:27 vm00 bash[17744]: debug 2026-03-09T18:29:27.869+0000 7f36a2b79000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:29:28.314 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:27 vm00 bash[17744]: debug 2026-03-09T18:29:27.973+0000 7f36a2b79000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 
2026-03-09T18:29:28.314 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:28 vm00 bash[17744]: debug 2026-03-09T18:29:28.205+0000 7f36a2b79000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:29:28.314 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:28 vm00 bash[17744]: debug 2026-03-09T18:29:28.309+0000 7f36a2b79000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:29:28.314 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:27 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:27.979Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=5 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": dial tcp 192.168.123.100:8443: connect: connection refused" 2026-03-09T18:29:28.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:28 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:28] "GET /metrics HTTP/1.1" 200 34728 "" "Prometheus/2.33.4" 2026-03-09T18:29:28.612 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:28 vm00 bash[17744]: debug 2026-03-09T18:29:28.373+0000 7f36a2b79000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:29:28.612 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:28 vm00 bash[17744]: debug 2026-03-09T18:29:28.533+0000 7f36a2b79000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:28 vm00 bash[17468]: cluster 2026-03-09T18:29:27.685558+0000 mon.a (mon.0) 768 : cluster [DBG] mgrmap e23: x(active, since 1.97304s), standbys: y 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:28 vm00 bash[17468]: cluster 2026-03-09T18:29:27.711683+0000 mgr.x (mgr.24751) 6 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:28.880 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:28 vm00 bash[17468]: cephadm 2026-03-09T18:29:27.744528+0000 mgr.x (mgr.24751) 7 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:28 vm00 bash[17468]: cephadm 2026-03-09T18:29:28.167047+0000 mgr.x (mgr.24751) 8 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:28 vm00 bash[17468]: cephadm 2026-03-09T18:29:28.387483+0000 mgr.x (mgr.24751) 9 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Bus STARTING 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:28 vm00 bash[22468]: cluster 2026-03-09T18:29:27.685558+0000 mon.a (mon.0) 768 : cluster [DBG] mgrmap e23: x(active, since 1.97304s), standbys: y 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:28 vm00 bash[22468]: cluster 2026-03-09T18:29:27.711683+0000 mgr.x (mgr.24751) 6 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:28 vm00 bash[22468]: cephadm 2026-03-09T18:29:27.744528+0000 mgr.x (mgr.24751) 7 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:28 vm00 bash[22468]: cephadm 2026-03-09T18:29:28.167047+0000 mgr.x (mgr.24751) 8 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:28 vm00 bash[22468]: cephadm 2026-03-09T18:29:28.387483+0000 mgr.x (mgr.24751) 9 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Bus STARTING 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:28 vm00 bash[17744]: debug 2026-03-09T18:29:28.609+0000 7f36a2b79000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:29:28.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:28 
vm00 bash[17744]: debug 2026-03-09T18:29:28.713+0000 7f36a2b79000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:29:28.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:28 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:28.702Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=6 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:29:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:28 vm08 bash[17774]: cluster 2026-03-09T18:29:27.685558+0000 mon.a (mon.0) 768 : cluster [DBG] mgrmap e23: x(active, since 1.97304s), standbys: y 2026-03-09T18:29:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:28 vm08 bash[17774]: cluster 2026-03-09T18:29:27.711683+0000 mgr.x (mgr.24751) 6 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:28 vm08 bash[17774]: cephadm 2026-03-09T18:29:27.744528+0000 mgr.x (mgr.24751) 7 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-09T18:29:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:28 vm08 bash[17774]: cephadm 2026-03-09T18:29:28.167047+0000 mgr.x (mgr.24751) 8 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T18:29:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:28 vm08 bash[17774]: cephadm 2026-03-09T18:29:28.387483+0000 mgr.x (mgr.24751) 9 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Bus STARTING 2026-03-09T18:29:28.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:28 vm08 bash[33963]: ts=2026-03-09T18:29:28.659Z caller=manager.go:609 level=warn component="rule manager" group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on(ceph_daemon) 
group_left(hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: |\n OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked down and back up at {{ $value | humanize }} times once a minute for 5 minutes. This could indicate a network issue (latency, packet drop, disruption) on the clusters \"cluster network\". Check the network environment on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSD's to flap (mark each other out)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.100:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:29:28.975 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:28 vm08 bash[33963]: ts=2026-03-09T18:29:28.659Z caller=manager.go:609 level=warn component="rule manager" group=osd msg="Evaluating rule failed" rule="alert: CephPGImbalance\nexpr: abs(((ceph_osd_numpg > 0) - on(job) group_left() avg by(job) (ceph_osd_numpg\n > 0)) / on(job) group_left() avg by(job) (ceph_osd_numpg > 0)) * on(ceph_daemon)\n group_left(hostname) ceph_osd_metadata > 
0.3\nfor: 5m\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.5\n severity: warning\n type: ceph_default\nannotations:\n description: |\n OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} deviates by more than 30% from average PG count.\n summary: PG allocations are not balanced across devices\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.100:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:29:29.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:29 vm00 bash[17744]: debug 2026-03-09T18:29:29.285+0000 7f36a2b79000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:29:29.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:29 vm00 bash[17744]: debug 2026-03-09T18:29:29.345+0000 7f36a2b79000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:29:29.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:29 vm00 bash[17744]: debug 2026-03-09T18:29:29.405+0000 7f36a2b79000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:29:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:29 vm08 bash[17774]: cephadm 2026-03-09T18:29:28.528424+0000 mgr.x (mgr.24751) 10 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Serving on 
https://192.168.123.108:7150 2026-03-09T18:29:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:29 vm08 bash[17774]: cephadm 2026-03-09T18:29:28.536854+0000 mgr.x (mgr.24751) 11 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Client ('192.168.123.108', 58994) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:29:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:29 vm08 bash[17774]: cephadm 2026-03-09T18:29:28.637751+0000 mgr.x (mgr.24751) 12 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Serving on http://192.168.123.108:8765 2026-03-09T18:29:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:29 vm08 bash[17774]: cephadm 2026-03-09T18:29:28.637789+0000 mgr.x (mgr.24751) 13 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Bus STARTED 2026-03-09T18:29:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:29 vm08 bash[17774]: cluster 2026-03-09T18:29:28.667912+0000 mgr.x (mgr.24751) 14 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:29 vm08 bash[17774]: cluster 2026-03-09T18:29:28.704174+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e24: x(active, since 2s), standbys: y 2026-03-09T18:29:30.010 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:29 vm00 bash[22468]: cephadm 2026-03-09T18:29:28.528424+0000 mgr.x (mgr.24751) 10 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Serving on https://192.168.123.108:7150 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:29 vm00 bash[22468]: cephadm 2026-03-09T18:29:28.536854+0000 mgr.x (mgr.24751) 11 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Client ('192.168.123.108', 58994) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:29:30.011 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:29 vm00 bash[22468]: cephadm 2026-03-09T18:29:28.637751+0000 mgr.x (mgr.24751) 12 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Serving on http://192.168.123.108:8765 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:29 vm00 bash[22468]: cephadm 2026-03-09T18:29:28.637789+0000 mgr.x (mgr.24751) 13 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Bus STARTED 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:29 vm00 bash[22468]: cluster 2026-03-09T18:29:28.667912+0000 mgr.x (mgr.24751) 14 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:29 vm00 bash[22468]: cluster 2026-03-09T18:29:28.704174+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e24: x(active, since 2s), standbys: y 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:29 vm00 bash[17468]: cephadm 2026-03-09T18:29:28.528424+0000 mgr.x (mgr.24751) 10 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Serving on https://192.168.123.108:7150 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:29 vm00 bash[17468]: cephadm 2026-03-09T18:29:28.536854+0000 mgr.x (mgr.24751) 11 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Client ('192.168.123.108', 58994) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:29 vm00 bash[17468]: cephadm 2026-03-09T18:29:28.637751+0000 mgr.x (mgr.24751) 12 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Serving on http://192.168.123.108:8765 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:29 vm00 bash[17468]: cephadm 2026-03-09T18:29:28.637789+0000 mgr.x (mgr.24751) 13 : cephadm [INF] [09/Mar/2026:18:29:28] ENGINE Bus STARTED 
2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:29 vm00 bash[17468]: cluster 2026-03-09T18:29:28.667912+0000 mgr.x (mgr.24751) 14 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:29 vm00 bash[17468]: cluster 2026-03-09T18:29:28.704174+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e24: x(active, since 2s), standbys: y 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:29 vm00 bash[17744]: debug 2026-03-09T18:29:29.757+0000 7f36a2b79000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:29 vm00 bash[17744]: debug 2026-03-09T18:29:29.829+0000 7f36a2b79000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:29 vm00 bash[17744]: debug 2026-03-09T18:29:29.901+0000 7f36a2b79000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:29:30.011 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:30 vm00 bash[17744]: debug 2026-03-09T18:29:30.009+0000 7f36a2b79000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:29:30.618 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:30 vm00 bash[17744]: debug 2026-03-09T18:29:30.341+0000 7f36a2b79000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:29:30.618 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:30 vm00 bash[17744]: debug 2026-03-09T18:29:30.549+0000 7f36a2b79000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:29:30.618 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:30 vm00 bash[17744]: debug 2026-03-09T18:29:30.613+0000 7f36a2b79000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:29:30.869 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:30 vm00 
bash[17744]: debug 2026-03-09T18:29:30.681+0000 7f36a2b79000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:29:31.729 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:30 vm00 bash[17744]: debug 2026-03-09T18:29:30.865+0000 7f36a2b79000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:29:31.730 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:31 vm00 bash[17744]: debug 2026-03-09T18:29:31.405+0000 7f36a2b79000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:29:31.730 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:31 vm00 bash[17744]: [09/Mar/2026:18:29:31] ENGINE Bus STARTING 2026-03-09T18:29:31.730 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:31 vm00 bash[17744]: CherryPy Checker: 2026-03-09T18:29:31.730 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:31 vm00 bash[17744]: The Application mounted at '' has an empty config. 2026-03-09T18:29:31.730 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:31 vm00 bash[17744]: [09/Mar/2026:18:29:31] ENGINE Serving on http://:::9283 2026-03-09T18:29:31.730 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:31 vm00 bash[17744]: [09/Mar/2026:18:29:31] ENGINE Bus STARTED 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: cluster 2026-03-09T18:29:30.668240+0000 mgr.x (mgr.24751) 15 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: cluster 2026-03-09T18:29:30.739805+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e25: x(active, since 5s), standbys: y 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: cluster 2026-03-09T18:29:31.413039+0000 mon.a (mon.0) 771 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 
vm00 bash[17468]: cluster 2026-03-09T18:29:31.413204+0000 mon.a (mon.0) 772 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: audit 2026-03-09T18:29:31.414887+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: audit 2026-03-09T18:29:31.416361+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: audit 2026-03-09T18:29:31.418652+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:31 vm00 bash[17468]: audit 2026-03-09T18:29:31.419336+0000 mon.b (mon.2) 64 : audit [DBG] from='mgr.? 
192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: cluster 2026-03-09T18:29:30.668240+0000 mgr.x (mgr.24751) 15 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: cluster 2026-03-09T18:29:30.739805+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e25: x(active, since 5s), standbys: y 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: cluster 2026-03-09T18:29:31.413039+0000 mon.a (mon.0) 771 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: cluster 2026-03-09T18:29:31.413204+0000 mon.a (mon.0) 772 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: audit 2026-03-09T18:29:31.414887+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: audit 2026-03-09T18:29:31.416361+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: audit 2026-03-09T18:29:31.418652+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.? 
192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:29:32.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:31 vm00 bash[22468]: audit 2026-03-09T18:29:31.419336+0000 mon.b (mon.2) 64 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:29:32.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: cluster 2026-03-09T18:29:30.668240+0000 mgr.x (mgr.24751) 15 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:32.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: cluster 2026-03-09T18:29:30.739805+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e25: x(active, since 5s), standbys: y 2026-03-09T18:29:32.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: cluster 2026-03-09T18:29:31.413039+0000 mon.a (mon.0) 771 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T18:29:32.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: cluster 2026-03-09T18:29:31.413204+0000 mon.a (mon.0) 772 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:29:32.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: audit 2026-03-09T18:29:31.414887+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:29:32.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: audit 2026-03-09T18:29:31.416361+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.? 
192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:29:32.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: audit 2026-03-09T18:29:31.418652+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:29:32.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:31 vm08 bash[17774]: audit 2026-03-09T18:29:31.419336+0000 mon.b (mon.2) 64 : audit [DBG] from='mgr.? 192.168.123.100:0/1036488294' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:29:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:33 vm08 bash[17774]: audit 2026-03-09T18:29:31.571947+0000 mgr.x (mgr.24751) 16 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:33.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:33 vm08 bash[17774]: cluster 2026-03-09T18:29:32.389748+0000 mon.a (mon.0) 773 : cluster [DBG] mgrmap e26: x(active, since 6s), standbys: y 2026-03-09T18:29:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:33 vm00 bash[17468]: audit 2026-03-09T18:29:31.571947+0000 mgr.x (mgr.24751) 16 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:33.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:33 vm00 bash[17468]: cluster 2026-03-09T18:29:32.389748+0000 mon.a (mon.0) 773 : cluster [DBG] mgrmap e26: x(active, since 6s), standbys: y 2026-03-09T18:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:33 vm00 bash[22468]: audit 2026-03-09T18:29:31.571947+0000 mgr.x (mgr.24751) 16 : audit [DBG] from='client.14694 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T18:29:33.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:33 vm00 bash[22468]: cluster 2026-03-09T18:29:32.389748+0000 mon.a (mon.0) 773 : cluster [DBG] mgrmap e26: x(active, since 6s), standbys: y 2026-03-09T18:29:33.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:33 vm00 bash[42815]: level=error ts=2026-03-09T18:29:33.529Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.100:8443//api/prometheus_receiver\": dial tcp 192.168.123.100:8443: connect: connection refused; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:29:33.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:33.531Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.100:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.100 because it doesn't contain any IP SANs" 2026-03-09T18:29:33.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:33 vm00 bash[42815]: level=warn ts=2026-03-09T18:29:33.531Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.108:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.108 because it doesn't contain any IP SANs" 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: cluster 2026-03-09T18:29:32.668539+0000 mgr.x (mgr.24751) 17 : cluster [DBG] pgmap v6: 161 pgs: 
161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:33.547642+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:33.557480+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:33.983186+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:33.991575+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:34.199047+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:34.208538+0000 mon.b (mon.2) 65 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:34.208983+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:34 vm08 bash[17774]: audit 2026-03-09T18:29:34.210861+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: cluster 
2026-03-09T18:29:32.668539+0000 mgr.x (mgr.24751) 17 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:33.547642+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:33.557480+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:33.983186+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:33.991575+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:34.199047+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:34.208538+0000 mon.b (mon.2) 65 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:34.208983+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:34 vm00 bash[17468]: audit 2026-03-09T18:29:34.210861+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:34.881 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: cluster 2026-03-09T18:29:32.668539+0000 mgr.x (mgr.24751) 17 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:33.547642+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:33.557480+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:33.983186+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:33.991575+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:34.199047+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:34.208538+0000 mon.b (mon.2) 65 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:34.208983+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:34.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:34 vm00 bash[22468]: audit 2026-03-09T18:29:34.210861+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.612352+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.618451+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.618940+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.619779+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.620504+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.620810+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.621340+0000 mgr.x (mgr.24751) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.621449+0000 mgr.x 
(mgr.24751) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.664105+0000 mgr.x (mgr.24751) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.664217+0000 mgr.x (mgr.24751) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cluster 2026-03-09T18:29:34.669217+0000 mgr.x (mgr.24751) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.709020+0000 mgr.x (mgr.24751) 23 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.709212+0000 mgr.x (mgr.24751) 24 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.751959+0000 mgr.x (mgr.24751) 25 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.754256+0000 mgr.x (mgr.24751) 26 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.801696+0000 
mon.a (mon.0) 784 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.807386+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.813101+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.818463+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.823050+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.842534+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.847500+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.853569+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.869241+0000 mgr.x (mgr.24751) 27 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: audit 2026-03-09T18:29:34.870217+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:35 vm00 bash[22468]: cephadm 2026-03-09T18:29:34.874538+0000 mgr.x (mgr.24751) 28 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:35 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:35] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.612352+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.618451+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.618940+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.619779+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.620504+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 
bash[17468]: audit 2026-03-09T18:29:34.620810+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.621340+0000 mgr.x (mgr.24751) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.621449+0000 mgr.x (mgr.24751) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.664105+0000 mgr.x (mgr.24751) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.664217+0000 mgr.x (mgr.24751) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cluster 2026-03-09T18:29:34.669217+0000 mgr.x (mgr.24751) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.709020+0000 mgr.x (mgr.24751) 23 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.709212+0000 mgr.x (mgr.24751) 24 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 
2026-03-09T18:29:34.751959+0000 mgr.x (mgr.24751) 25 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.754256+0000 mgr.x (mgr.24751) 26 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.801696+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.807386+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.813101+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.818463+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.823050+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.842534+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.847500+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.853569+0000 mon.a (mon.0) 791 : audit 
[INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.869241+0000 mgr.x (mgr.24751) 27 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: audit 2026-03-09T18:29:34.870217+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:35 vm00 bash[17468]: cephadm 2026-03-09T18:29:34.874538+0000 mgr.x (mgr.24751) 28 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.612352+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.618451+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.618940+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.619779+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.620504+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.620810+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.621340+0000 mgr.x (mgr.24751) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.621449+0000 mgr.x (mgr.24751) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.664105+0000 mgr.x (mgr.24751) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.664217+0000 mgr.x (mgr.24751) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:29:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cluster 2026-03-09T18:29:34.669217+0000 mgr.x (mgr.24751) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.709020+0000 mgr.x (mgr.24751) 23 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.709212+0000 mgr.x (mgr.24751) 24 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:29:35.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.751959+0000 mgr.x (mgr.24751) 25 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.754256+0000 mgr.x (mgr.24751) 26 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.801696+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.807386+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.813101+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.818463+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.823050+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.842534+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.847500+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 
vm08 bash[17774]: audit 2026-03-09T18:29:34.853569+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.869241+0000 mgr.x (mgr.24751) 27 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: audit 2026-03-09T18:29:34.870217+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:35 vm08 bash[17774]: cephadm 2026-03-09T18:29:34.874538+0000 mgr.x (mgr.24751) 28 : cephadm [INF] Deploying daemon alertmanager.a on vm00 2026-03-09T18:29:38.381 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:37 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:37] "GET /metrics HTTP/1.1" 200 34728 "" "Prometheus/2.33.4" 2026-03-09T18:29:38.644 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:38 vm00 bash[22468]: cluster 2026-03-09T18:29:36.669616+0000 mgr.x (mgr.24751) 29 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:38.645 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:38 vm00 bash[17468]: cluster 2026-03-09T18:29:36.669616+0000 mgr.x (mgr.24751) 29 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:38 vm08 bash[17774]: cluster 2026-03-09T18:29:36.669616+0000 mgr.x (mgr.24751) 29 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:39.243 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: 
/etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.243 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.243 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.243 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.243 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.244 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.244 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.244 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.244 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.244 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: Stopping Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:29:39.244 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[42815]: level=info ts=2026-03-09T18:29:39.049Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:29:39.244 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50835]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-alertmanager-a 2026-03-09T18:29:39.244 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@alertmanager.a.service: Deactivated successfully. 2026-03-09T18:29:39.244 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: Stopped Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:29:39.539 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.539 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:29:39.539 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.539 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.539 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.539 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:29:39.539 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.540 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.540 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:39.540 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: Started Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.541Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.543Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.546Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.547Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.576Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.577Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.579Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T18:29:39.791 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[50953]: ts=2026-03-09T18:29:39.579Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T18:29:40.102 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.102 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.102 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.102 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:29:40.102 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.102 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.103 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.103 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:29:40.103 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: Stopping Ceph node-exporter.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:29:40.103 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 bash[51076]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-node-exporter-a 2026-03-09T18:29:40.103 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:29:40.103 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T18:29:40.103 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: Stopped Ceph node-exporter.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:29:40.103 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:39 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: cluster 2026-03-09T18:29:38.670150+0000 mgr.x (mgr.24751) 30 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:39.370091+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: cephadm 2026-03-09T18:29:39.378250+0000 mgr.x (mgr.24751) 31 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)... 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: cephadm 2026-03-09T18:29:39.378764+0000 mgr.x (mgr.24751) 32 : cephadm [INF] Deploying daemon node-exporter.a on vm00 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:39.379093+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:40.228270+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:40.233916+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:40.234018+0000 mon.a (mon.0) 
796 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:40.236257+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[17468]: audit 2026-03-09T18:29:40.237178+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:40.369 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.370 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.370 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.370 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.370 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.370 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:40.370 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: Started Ceph node-exporter.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:29:40.370 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:40 vm00 bash[51187]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-09T18:29:40.370 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:40.370 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:40.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: cluster 2026-03-09T18:29:38.670150+0000 mgr.x (mgr.24751) 30 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:39.370091+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: cephadm 2026-03-09T18:29:39.378250+0000 mgr.x (mgr.24751) 31 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)...
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: cephadm 2026-03-09T18:29:39.378764+0000 mgr.x (mgr.24751) 32 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:39.379093+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:40.228270+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:40.233916+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:40.234018+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:40.236257+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:40.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:40 vm00 bash[22468]: audit 2026-03-09T18:29:40.237178+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:29:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: cluster 2026-03-09T18:29:38.670150+0000 mgr.x (mgr.24751) 30 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:29:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:39.370091+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: cephadm 2026-03-09T18:29:39.378250+0000 mgr.x (mgr.24751) 31 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)...
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: cephadm 2026-03-09T18:29:39.378764+0000 mgr.x (mgr.24751) 32 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:39.379093+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:40.228270+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:40.233916+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:40.234018+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:40.236257+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:29:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:40 vm08 bash[17774]: audit 2026-03-09T18:29:40.237178+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:29:41.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: cephadm 2026-03-09T18:29:40.233610+0000 mgr.x (mgr.24751) 33 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)...
2026-03-09T18:29:41.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: cephadm 2026-03-09T18:29:40.237909+0000 mgr.x (mgr.24751) 34 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:29:41.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: audit 2026-03-09T18:29:40.803295+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: audit 2026-03-09T18:29:40.813870+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: audit 2026-03-09T18:29:40.852670+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: audit 2026-03-09T18:29:40.861726+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.419 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: audit 2026-03-09T18:29:40.862675+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T18:29:41.419 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 bash[17774]: audit 2026-03-09T18:29:41.321391+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.100:0/2701567172' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:29:41.419 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: Stopping Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:29:41.419 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37801]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-grafana.a
2026-03-09T18:29:41.419 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[33398]: t=2026-03-09T18:29:41+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated"
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: cephadm 2026-03-09T18:29:40.233610+0000 mgr.x (mgr.24751) 33 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)...
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: cephadm 2026-03-09T18:29:40.237909+0000 mgr.x (mgr.24751) 34 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: audit 2026-03-09T18:29:40.803295+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: audit 2026-03-09T18:29:40.813870+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: audit 2026-03-09T18:29:40.852670+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: audit 2026-03-09T18:29:40.861726+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: audit 2026-03-09T18:29:40.862675+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:41 vm00 bash[22468]: audit 2026-03-09T18:29:41.321391+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.100:0/2701567172' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: cephadm 2026-03-09T18:29:40.233610+0000 mgr.x (mgr.24751) 33 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)...
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: cephadm 2026-03-09T18:29:40.237909+0000 mgr.x (mgr.24751) 34 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: audit 2026-03-09T18:29:40.803295+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: audit 2026-03-09T18:29:40.813870+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: audit 2026-03-09T18:29:40.852670+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: audit 2026-03-09T18:29:40.861726+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:41.592 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: audit 2026-03-09T18:29:40.862675+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T18:29:41.593 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[17468]: audit 2026-03-09T18:29:41.321391+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.100:0/2701567172' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:29:41.593 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[50953]: ts=2026-03-09T18:29:41.548Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00038386s
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[33398]: t=2026-03-09T18:29:41+0000 lvl=info msg="Database locked, sleeping then retrying" logger=sqlstore error="database is locked" retry=0
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37808]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-grafana-a
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37843]: Error response from daemon: No such container: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-grafana.a
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@grafana.a.service: Deactivated successfully.
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: Stopped Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: Started Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." logger=settings
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
2026-03-09T18:29:41.672 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="App mode production" logger=settings
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Starting DB migrations" logger=migrator
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="migrations completed" logger=migrator performed=0 skipped=377 duration=543.187µs
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Created default organization" logger=sqlstore
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Initialising plugins" logger=plugin.manager
2026-03-09T18:29:41.673 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input
2026-03-09T18:29:41.880 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[51187]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-09T18:29:41.960 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="deleted datasource based on configuration" logger=provisioning.datasources name=Dashboard1
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Loki uid=P8E80F9AEF21F6940
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket=
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="warming cache for startup" logger=ngalert
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 bash[37867]: t=2026-03-09T18:29:41+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager
2026-03-09T18:29:41.961 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:41 vm08 systemd[1]: Stopping Ceph node-exporter.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:29:42.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[37984]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-node-exporter-b
2026-03-09T18:29:42.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-09T18:29:42.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: Stopped Ceph node-exporter.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.225 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:29:42.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[51187]: 2abcce694348: Pulling fs layer
2026-03-09T18:29:42.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[51187]: 455fd88e5221: Pulling fs layer
2026-03-09T18:29:42.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:41 vm00 bash[51187]: 324153f2810a: Pulling fs layer
2026-03-09T18:29:42.679 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: cluster 2026-03-09T18:29:40.670487+0000 mgr.x (mgr.24751) 35 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: cephadm 2026-03-09T18:29:40.813019+0000 mgr.x (mgr.24751) 36 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)...
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: cephadm 2026-03-09T18:29:40.817911+0000 mgr.x (mgr.24751) 37 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:40.863180+0000 mgr.x (mgr.24751) 38 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: cephadm 2026-03-09T18:29:40.866109+0000 mgr.x (mgr.24751) 39 : cephadm [INF] Reconfiguring daemon grafana.a on vm08
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:41.489288+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:41.498107+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:41.541906+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 192.168.123.100:0/1724117691' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2239094263"}]: dispatch
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:42.095390+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:42.307363+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[17468]: audit 2026-03-09T18:29:42.315008+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: cluster 2026-03-09T18:29:40.670487+0000 mgr.x (mgr.24751) 35 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: cephadm 2026-03-09T18:29:40.813019+0000 mgr.x (mgr.24751) 36 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)...
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: cephadm 2026-03-09T18:29:40.817911+0000 mgr.x (mgr.24751) 37 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:40.863180+0000 mgr.x (mgr.24751) 38 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: cephadm 2026-03-09T18:29:40.866109+0000 mgr.x (mgr.24751) 39 : cephadm [INF] Reconfiguring daemon grafana.a on vm08
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:41.489288+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:41.498107+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:41.541906+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 192.168.123.100:0/1724117691' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2239094263"}]: dispatch
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:42.095390+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:42.307363+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:42 vm00 bash[22468]: audit 2026-03-09T18:29:42.315008+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:29:42.680 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 2abcce694348: Verifying Checksum
2026-03-09T18:29:42.680 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 2abcce694348: Download complete
2026-03-09T18:29:42.681 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 455fd88e5221: Verifying Checksum
2026-03-09T18:29:42.681 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 455fd88e5221: Download complete
2026-03-09T18:29:42.681 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 2abcce694348: Pull complete
2026-03-09T18:29:42.681 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 324153f2810a: Verifying Checksum
2026-03-09T18:29:42.681 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 324153f2810a: Download complete
2026-03-09T18:29:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: cluster 2026-03-09T18:29:40.670487+0000 mgr.x (mgr.24751) 35 :
cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: cephadm 2026-03-09T18:29:40.813019+0000 mgr.x (mgr.24751) 36 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: cephadm 2026-03-09T18:29:40.817911+0000 mgr.x (mgr.24751) 37 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:40.863180+0000 mgr.x (mgr.24751) 38 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: cephadm 2026-03-09T18:29:40.866109+0000 mgr.x (mgr.24751) 39 : cephadm [INF] Reconfiguring daemon grafana.a on vm08 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:41.489288+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:41.498107+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:41.541906+0000 mon.a (mon.0) 805 : audit [INF] from='client.? 
192.168.123.100:0/1724117691' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2239094263"}]: dispatch 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:42.095390+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:42.307363+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:42.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[17774]: audit 2026-03-09T18:29:42.315008+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:42.725 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:42 vm08 bash[33963]: ts=2026-03-09T18:29:42.314Z caller=manager.go:609 level=warn component="rule manager" group=pools msg="Evaluating rule failed" rule="alert: CephPoolGrowthWarning\nexpr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id) group_right()\n ceph_pool_metadata) >= 95\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.9.2\n severity: warning\n type: ceph_default\nannotations:\n description: |\n Pool '{{ $labels.name }}' will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.\n summary: Pool growth rate may soon exceed it's capacity\n" err="found duplicate series for the match group {pool_id=\"1\"} on the left hand-side of the operation: [{instance=\"192.168.123.108:9283\", job=\"ceph\", pool_id=\"1\"}, {instance=\"192.168.123.100:9283\", job=\"ceph\", pool_id=\"1\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:29:42.725 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 systemd[1]: Started Ceph node-exporter.b 
for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:29:42.725 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:42 vm08 bash[38095]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T18:29:42.945 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 455fd88e5221: Pull complete 2026-03-09T18:29:42.945 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 324153f2810a: Pull complete 2026-03-09T18:29:42.945 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T18:29:42.945 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.947Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.947Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.947Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.947Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to 
open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.948Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.948Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 
level=info collector=conntrack 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T18:29:43.380 
INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T18:29:43.380 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: 
ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info 
collector=tapestats 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T18:29:43.381 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:29:42 vm00 bash[51187]: ts=2026-03-09T18:29:42.949Z caller=tls_config.go:277 level=info msg="TLS is disabled." 
http2=false address=[::]:9100 2026-03-09T18:29:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[17774]: cephadm 2026-03-09T18:29:41.499002+0000 mgr.x (mgr.24751) 40 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-09T18:29:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[17774]: cephadm 2026-03-09T18:29:41.499308+0000 mgr.x (mgr.24751) 41 : cephadm [INF] Deploying daemon node-exporter.b on vm08 2026-03-09T18:29:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[17774]: cephadm 2026-03-09T18:29:42.317092+0000 mgr.x (mgr.24751) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:29:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[17774]: audit 2026-03-09T18:29:42.505931+0000 mon.a (mon.0) 808 : audit [INF] from='client.? 192.168.123.100:0/1724117691' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2239094263"}]': finished 2026-03-09T18:29:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[17774]: cluster 2026-03-09T18:29:42.505981+0000 mon.a (mon.0) 809 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T18:29:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[17774]: audit 2026-03-09T18:29:42.776333+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 192.168.123.100:0/365533971' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/2237172914"}]: dispatch 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:43 vm00 bash[17468]: cephadm 2026-03-09T18:29:41.499002+0000 mgr.x (mgr.24751) 40 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 
2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:43 vm00 bash[17468]: cephadm 2026-03-09T18:29:41.499308+0000 mgr.x (mgr.24751) 41 : cephadm [INF] Deploying daemon node-exporter.b on vm08 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:43 vm00 bash[17468]: cephadm 2026-03-09T18:29:42.317092+0000 mgr.x (mgr.24751) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:43 vm00 bash[17468]: audit 2026-03-09T18:29:42.505931+0000 mon.a (mon.0) 808 : audit [INF] from='client.? 192.168.123.100:0/1724117691' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2239094263"}]': finished 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:43 vm00 bash[17468]: cluster 2026-03-09T18:29:42.505981+0000 mon.a (mon.0) 809 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:43 vm00 bash[17468]: audit 2026-03-09T18:29:42.776333+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 192.168.123.100:0/365533971' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/2237172914"}]: dispatch 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:43 vm00 bash[22468]: cephadm 2026-03-09T18:29:41.499002+0000 mgr.x (mgr.24751) 40 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 
2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:43 vm00 bash[22468]: cephadm 2026-03-09T18:29:41.499308+0000 mgr.x (mgr.24751) 41 : cephadm [INF] Deploying daemon node-exporter.b on vm08 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:43 vm00 bash[22468]: cephadm 2026-03-09T18:29:42.317092+0000 mgr.x (mgr.24751) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:43 vm00 bash[22468]: audit 2026-03-09T18:29:42.505931+0000 mon.a (mon.0) 808 : audit [INF] from='client.? 192.168.123.100:0/1724117691' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2239094263"}]': finished 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:43 vm00 bash[22468]: cluster 2026-03-09T18:29:42.505981+0000 mon.a (mon.0) 809 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T18:29:43.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:43 vm00 bash[22468]: audit 2026-03-09T18:29:42.776333+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 
192.168.123.100:0/365533971' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/2237172914"}]: dispatch 2026-03-09T18:29:44.128 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:43 vm08 bash[38095]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T18:29:44.395 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 2abcce694348: Pulling fs layer 2026-03-09T18:29:44.395 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 455fd88e5221: Pulling fs layer 2026-03-09T18:29:44.395 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 324153f2810a: Pulling fs layer 2026-03-09T18:29:44.651 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[17774]: cephadm 2026-03-09T18:29:42.474100+0000 mgr.x (mgr.24751) 43 : cephadm [INF] Deploying daemon prometheus.a on vm08 2026-03-09T18:29:44.651 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[17774]: cluster 2026-03-09T18:29:42.670809+0000 mgr.x (mgr.24751) 44 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:44.651 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[17774]: audit 2026-03-09T18:29:43.517535+0000 mon.a (mon.0) 811 : audit [INF] from='client.? 192.168.123.100:0/365533971' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/2237172914"}]': finished 2026-03-09T18:29:44.652 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[17774]: cluster 2026-03-09T18:29:43.517564+0000 mon.a (mon.0) 812 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T18:29:44.652 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[17774]: audit 2026-03-09T18:29:43.719628+0000 mon.a (mon.0) 813 : audit [INF] from='client.? 
192.168.123.100:0/4066181341' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/2237172914"}]: dispatch 2026-03-09T18:29:44.652 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 455fd88e5221: Verifying Checksum 2026-03-09T18:29:44.652 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 455fd88e5221: Download complete 2026-03-09T18:29:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:44 vm00 bash[17468]: cephadm 2026-03-09T18:29:42.474100+0000 mgr.x (mgr.24751) 43 : cephadm [INF] Deploying daemon prometheus.a on vm08 2026-03-09T18:29:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:44 vm00 bash[17468]: cluster 2026-03-09T18:29:42.670809+0000 mgr.x (mgr.24751) 44 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:44 vm00 bash[17468]: audit 2026-03-09T18:29:43.517535+0000 mon.a (mon.0) 811 : audit [INF] from='client.? 192.168.123.100:0/365533971' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/2237172914"}]': finished 2026-03-09T18:29:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:44 vm00 bash[17468]: cluster 2026-03-09T18:29:43.517564+0000 mon.a (mon.0) 812 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T18:29:44.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:44 vm00 bash[17468]: audit 2026-03-09T18:29:43.719628+0000 mon.a (mon.0) 813 : audit [INF] from='client.? 
192.168.123.100:0/4066181341' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/2237172914"}]: dispatch 2026-03-09T18:29:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:44 vm00 bash[22468]: cephadm 2026-03-09T18:29:42.474100+0000 mgr.x (mgr.24751) 43 : cephadm [INF] Deploying daemon prometheus.a on vm08 2026-03-09T18:29:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:44 vm00 bash[22468]: cluster 2026-03-09T18:29:42.670809+0000 mgr.x (mgr.24751) 44 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:29:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:44 vm00 bash[22468]: audit 2026-03-09T18:29:43.517535+0000 mon.a (mon.0) 811 : audit [INF] from='client.? 192.168.123.100:0/365533971' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/2237172914"}]': finished 2026-03-09T18:29:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:44 vm00 bash[22468]: cluster 2026-03-09T18:29:43.517564+0000 mon.a (mon.0) 812 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T18:29:44.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:44 vm00 bash[22468]: audit 2026-03-09T18:29:43.719628+0000 mon.a (mon.0) 813 : audit [INF] from='client.? 
192.168.123.100:0/4066181341' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/2237172914"}]: dispatch 2026-03-09T18:29:44.937 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 2abcce694348: Download complete 2026-03-09T18:29:44.937 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 2abcce694348: Pull complete 2026-03-09T18:29:44.937 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 324153f2810a: Verifying Checksum 2026-03-09T18:29:44.937 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 324153f2810a: Download complete 2026-03-09T18:29:44.937 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 455fd88e5221: Pull complete 2026-03-09T18:29:45.224 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: 324153f2810a: Pull complete 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:44 vm08 bash[38095]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.069Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.069Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 
2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.071Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.071Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.071Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T18:29:45.225 
INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: 
ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info 
collector=netstat 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T18:29:45.225 
INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 
bash[38095]: ts=2026-03-09T18:29:45.072Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T18:29:45.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T18:29:45.226 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[38095]: ts=2026-03-09T18:29:45.072Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T18:29:45.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:45 vm00 bash[22468]: audit 2026-03-09T18:29:44.515737+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/4066181341' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/2237172914"}]': finished 2026-03-09T18:29:45.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:45 vm00 bash[22468]: cluster 2026-03-09T18:29:44.515825+0000 mon.a (mon.0) 815 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T18:29:45.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:45 vm00 bash[22468]: cluster 2026-03-09T18:29:44.671074+0000 mgr.x (mgr.24751) 45 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:45 vm00 bash[22468]: audit 2026-03-09T18:29:44.718613+0000 mon.b (mon.2) 73 : audit [INF] from='client.? 192.168.123.100:0/2927674109' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]: dispatch 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:45 vm00 bash[22468]: audit 2026-03-09T18:29:44.721205+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]: dispatch 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:29:45 vm00 bash[17744]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:45] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:45 vm00 bash[17468]: audit 2026-03-09T18:29:44.515737+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/4066181341' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/2237172914"}]': finished 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:45 vm00 bash[17468]: cluster 2026-03-09T18:29:44.515825+0000 mon.a (mon.0) 815 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:45 vm00 bash[17468]: cluster 2026-03-09T18:29:44.671074+0000 mgr.x (mgr.24751) 45 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:45 vm00 bash[17468]: audit 2026-03-09T18:29:44.718613+0000 mon.b (mon.2) 73 : audit [INF] from='client.? 192.168.123.100:0/2927674109' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]: dispatch 2026-03-09T18:29:45.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:45 vm00 bash[17468]: audit 2026-03-09T18:29:44.721205+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]: dispatch 2026-03-09T18:29:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[17774]: audit 2026-03-09T18:29:44.515737+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 192.168.123.100:0/4066181341' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/2237172914"}]': finished 2026-03-09T18:29:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[17774]: cluster 2026-03-09T18:29:44.515825+0000 mon.a (mon.0) 815 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T18:29:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[17774]: cluster 2026-03-09T18:29:44.671074+0000 mgr.x (mgr.24751) 45 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T18:29:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[17774]: audit 2026-03-09T18:29:44.718613+0000 mon.b (mon.2) 73 : audit [INF] from='client.? 192.168.123.100:0/2927674109' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]: dispatch 2026-03-09T18:29:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:45 vm08 bash[17774]: audit 2026-03-09T18:29:44.721205+0000 mon.a (mon.0) 816 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]: dispatch 2026-03-09T18:29:46.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:46 vm00 bash[22468]: audit 2026-03-09T18:29:45.526592+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]': finished 2026-03-09T18:29:46.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:46 vm00 bash[22468]: cluster 2026-03-09T18:29:45.526713+0000 mon.a (mon.0) 818 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T18:29:46.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:46 vm00 bash[22468]: audit 2026-03-09T18:29:45.747421+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.100:0/731930486' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2407307906"}]: dispatch 2026-03-09T18:29:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:46 vm00 bash[17468]: audit 2026-03-09T18:29:45.526592+0000 mon.a (mon.0) 817 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]': finished 2026-03-09T18:29:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:46 vm00 bash[17468]: cluster 2026-03-09T18:29:45.526713+0000 mon.a (mon.0) 818 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T18:29:46.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:46 vm00 bash[17468]: audit 2026-03-09T18:29:45.747421+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.100:0/731930486' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2407307906"}]: dispatch 2026-03-09T18:29:46.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:46 vm08 bash[17774]: audit 2026-03-09T18:29:45.526592+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3488844935"}]': finished 2026-03-09T18:29:46.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:46 vm08 bash[17774]: cluster 2026-03-09T18:29:45.526713+0000 mon.a (mon.0) 818 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T18:29:46.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:46 vm08 bash[17774]: audit 2026-03-09T18:29:45.747421+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.100:0/731930486' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2407307906"}]: dispatch 2026-03-09T18:29:48.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:47 vm00 bash[22468]: audit 2026-03-09T18:29:46.595801+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 192.168.123.100:0/731930486' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2407307906"}]': finished 2026-03-09T18:29:48.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:47 vm00 bash[22468]: cluster 2026-03-09T18:29:46.595900+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T18:29:48.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:47 vm00 bash[22468]: cluster 2026-03-09T18:29:46.671409+0000 mgr.x (mgr.24751) 46 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:29:48.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:47 vm00 bash[22468]: audit 2026-03-09T18:29:46.799710+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 
192.168.123.100:0/1328329140' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1003484560"}]: dispatch 2026-03-09T18:29:48.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:47 vm00 bash[17468]: audit 2026-03-09T18:29:46.595801+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 192.168.123.100:0/731930486' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2407307906"}]': finished 2026-03-09T18:29:48.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:47 vm00 bash[17468]: cluster 2026-03-09T18:29:46.595900+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T18:29:48.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:47 vm00 bash[17468]: cluster 2026-03-09T18:29:46.671409+0000 mgr.x (mgr.24751) 46 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:29:48.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:47 vm00 bash[17468]: audit 2026-03-09T18:29:46.799710+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 192.168.123.100:0/1328329140' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1003484560"}]: dispatch 2026-03-09T18:29:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:47 vm08 bash[17774]: audit 2026-03-09T18:29:46.595801+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 
192.168.123.100:0/731930486' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2407307906"}]': finished 2026-03-09T18:29:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:47 vm08 bash[17774]: cluster 2026-03-09T18:29:46.595900+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T18:29:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:47 vm08 bash[17774]: cluster 2026-03-09T18:29:46.671409+0000 mgr.x (mgr.24751) 46 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 72 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:29:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:47 vm08 bash[17774]: audit 2026-03-09T18:29:46.799710+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 192.168.123.100:0/1328329140' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1003484560"}]: dispatch 2026-03-09T18:29:48.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:48 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:29:47] "GET /metrics HTTP/1.1" 200 37527 "" "Prometheus/2.33.4" 2026-03-09T18:29:49.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:48 vm00 bash[22468]: audit 2026-03-09T18:29:47.767355+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.100:0/1328329140' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1003484560"}]': finished 2026-03-09T18:29:49.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:48 vm00 bash[22468]: cluster 2026-03-09T18:29:47.767414+0000 mon.a (mon.0) 824 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T18:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:48 vm00 bash[22468]: audit 2026-03-09T18:29:47.967294+0000 mon.c (mon.1) 134 : audit [INF] from='client.? 
192.168.123.100:0/2106064297' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]: dispatch 2026-03-09T18:29:49.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:48 vm00 bash[22468]: audit 2026-03-09T18:29:47.968061+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]: dispatch 2026-03-09T18:29:49.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:48 vm00 bash[17468]: audit 2026-03-09T18:29:47.767355+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.100:0/1328329140' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1003484560"}]': finished 2026-03-09T18:29:49.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:48 vm00 bash[17468]: cluster 2026-03-09T18:29:47.767414+0000 mon.a (mon.0) 824 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T18:29:49.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:48 vm00 bash[17468]: audit 2026-03-09T18:29:47.967294+0000 mon.c (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/2106064297' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]: dispatch 2026-03-09T18:29:49.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:48 vm00 bash[17468]: audit 2026-03-09T18:29:47.968061+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]: dispatch 2026-03-09T18:29:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:48 vm08 bash[17774]: audit 2026-03-09T18:29:47.767355+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 
192.168.123.100:0/1328329140' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1003484560"}]': finished 2026-03-09T18:29:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:48 vm08 bash[17774]: cluster 2026-03-09T18:29:47.767414+0000 mon.a (mon.0) 824 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T18:29:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:48 vm08 bash[17774]: audit 2026-03-09T18:29:47.967294+0000 mon.c (mon.1) 134 : audit [INF] from='client.? 192.168.123.100:0/2106064297' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]: dispatch 2026-03-09T18:29:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:48 vm08 bash[17774]: audit 2026-03-09T18:29:47.968061+0000 mon.a (mon.0) 825 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]: dispatch 2026-03-09T18:29:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:49 vm00 bash[17468]: cluster 2026-03-09T18:29:48.671840+0000 mgr.x (mgr.24751) 47 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:49 vm00 bash[17468]: audit 2026-03-09T18:29:48.785528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]': finished 2026-03-09T18:29:49.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:49 vm00 bash[17468]: cluster 2026-03-09T18:29:48.785739+0000 mon.a (mon.0) 827 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T18:29:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:49 vm00 bash[22468]: cluster 2026-03-09T18:29:48.671840+0000 mgr.x (mgr.24751) 47 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:49 vm00 bash[22468]: audit 2026-03-09T18:29:48.785528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]': finished 2026-03-09T18:29:49.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:49 vm00 bash[22468]: cluster 2026-03-09T18:29:48.785739+0000 mon.a (mon.0) 827 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T18:29:49.880 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:29:49 vm00 bash[50953]: ts=2026-03-09T18:29:49.550Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002760524s 2026-03-09T18:29:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:49 vm08 bash[17774]: cluster 2026-03-09T18:29:48.671840+0000 mgr.x (mgr.24751) 47 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:49 vm08 bash[17774]: audit 2026-03-09T18:29:48.785528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2989229681"}]': finished 2026-03-09T18:29:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:49 vm08 bash[17774]: cluster 2026-03-09T18:29:48.785739+0000 mon.a (mon.0) 827 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T18:29:50.951 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:50.951 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: Stopping Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:29:50.951 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:50 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.221 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.221 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.221 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.221 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.222 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.222 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.222 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.950Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.950Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.950Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.950Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.950Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.950Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.951Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.951Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.951Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.952Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.952Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[33963]: ts=2026-03-09T18:29:50.952Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:50 vm08 bash[38430]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus-a 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service: Deactivated successfully. 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: Stopped Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.223 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: Started Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:29:51.224 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:29:51 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:29:51.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE Bus STOPPING 2026-03-09T18:29:51.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:51.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE Bus STOPPED 2026-03-09T18:29:51.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE Bus STARTING 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.375Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.375Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T18:29:51.475 
INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.376Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm08 (none))" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.376Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.376Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.378Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.378Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.385Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.385Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.413µs 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.385Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.391Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.391Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.404Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=2 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.427Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=2 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.427Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=2 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.428Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=126.597µs wal_replay_duration=42.524209ms wbl_replay_duration=120ns total_replay_duration=42.66325ms 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.429Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.429Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.429Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.455Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=25.574986ms db_storage=1.653µs remote_storage=1.071µs web_handler=671ns query_engine=1.011µs 
scrape=792.864µs scrape_sd=122.029µs notify=8.356µs notify_sd=6.403µs rules=24.243773ms tracing=7.504µs 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.455Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T18:29:51.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:29:51 vm08 bash[38540]: ts=2026-03-09T18:29:51.455Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T18:29:51.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE Serving on http://:::9283 2026-03-09T18:29:51.976 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE Bus STARTED 2026-03-09T18:29:51.976 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:51 vm08 bash[36576]: [09/Mar/2026:18:29:51] ENGINE Bus STOPPING 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: cluster 2026-03-09T18:29:50.672213+0000 mgr.x (mgr.24751) 48 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 994 B/s rd, 0 op/s 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.133115+0000 mgr.x (mgr.24751) 49 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.261027+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.268601+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.271559+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.271897+0000 mgr.x (mgr.24751) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.272957+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.273199+0000 mgr.x (mgr.24751) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.281642+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.289422+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.289798+0000 mgr.x (mgr.24751) 52 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: cephadm 2026-03-09T18:29:51.300575+0000 mgr.x (mgr.24751) 53 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.301112+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.301320+0000 mon.b (mon.2) 77 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.301646+0000 mgr.x (mgr.24751) 54 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.303514+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.303878+0000 mgr.x (mgr.24751) 55 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.311361+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.315007+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.316316+0000 mgr.x (mgr.24751) 56 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.320440+0000 mon.b (mon.2) 80 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm08.local:3000"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.320712+0000 mgr.x (mgr.24751) 57 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm08.local:3000"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.329368+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.335535+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.335844+0000 mgr.x (mgr.24751) 58 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.336333+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm08.local:9095"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.336577+0000 mgr.x (mgr.24751) 59 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm08.local:9095"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.343909+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.391083+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.712272+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.721297+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.726748+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:52 vm08 bash[17774]: audit 2026-03-09T18:29:51.733648+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STOPPED 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STARTING 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Serving 
on http://:::9283 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STARTED 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STOPPING 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STOPPED 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STARTING 2026-03-09T18:29:52.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Serving on http://:::9283 2026-03-09T18:29:52.476 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:29:52 vm08 bash[36576]: [09/Mar/2026:18:29:52] ENGINE Bus STARTED 2026-03-09T18:29:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: cluster 2026-03-09T18:29:50.672213+0000 mgr.x (mgr.24751) 48 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 994 B/s rd, 0 op/s 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.133115+0000 mgr.x (mgr.24751) 49 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.261027+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.268601+0000 mon.a (mon.0) 829 : 
audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.271559+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.271897+0000 mgr.x (mgr.24751) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.272957+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.273199+0000 mgr.x (mgr.24751) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.281642+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.289422+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.289798+0000 mgr.x (mgr.24751) 52 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: cephadm 2026-03-09T18:29:51.300575+0000 mgr.x (mgr.24751) 53 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.301112+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.301320+0000 mon.b (mon.2) 77 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.301646+0000 mgr.x (mgr.24751) 54 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.303514+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.303878+0000 mgr.x (mgr.24751) 55 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.311361+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.315007+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.316316+0000 mgr.x (mgr.24751) 56 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.320440+0000 mon.b (mon.2) 80 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm08.local:3000"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.320712+0000 mgr.x (mgr.24751) 57 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm08.local:3000"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.329368+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.335535+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.335844+0000 mgr.x (mgr.24751) 58 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.336333+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm08.local:9095"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.336577+0000 mgr.x (mgr.24751) 59 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm08.local:9095"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.343909+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.391083+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.712272+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.721297+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.726748+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:52 vm00 bash[22468]: audit 2026-03-09T18:29:51.733648+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: cluster 2026-03-09T18:29:50.672213+0000 mgr.x (mgr.24751) 48 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 994 B/s rd, 0 op/s 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.133115+0000 mgr.x (mgr.24751) 49 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:29:52.631 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.261027+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.268601+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.271559+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.271897+0000 mgr.x (mgr.24751) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.272957+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.273199+0000 mgr.x (mgr.24751) 51 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.281642+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.289422+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.289798+0000 mgr.x (mgr.24751) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: cephadm 2026-03-09T18:29:51.300575+0000 mgr.x (mgr.24751) 53 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.301112+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.301320+0000 mon.b (mon.2) 77 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.301646+0000 mgr.x (mgr.24751) 54 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.303514+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.303878+0000 mgr.x (mgr.24751) 55 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.311361+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.315007+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.316316+0000 mgr.x (mgr.24751) 56 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.320440+0000 mon.b (mon.2) 80 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm08.local:3000"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.320712+0000 mgr.x (mgr.24751) 57 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm08.local:3000"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.329368+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.335535+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.335844+0000 mgr.x (mgr.24751) 58 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.336333+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm08.local:9095"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.336577+0000 mgr.x (mgr.24751) 59 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm08.local:9095"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.343909+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.391083+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.712272+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.721297+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.726748+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:52.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:52 vm00 bash[17468]: audit 2026-03-09T18:29:51.733648+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:54 vm00 bash[17468]: cluster 2026-03-09T18:29:52.672542+0000 mgr.x (mgr.24751) 60 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 841 B/s rd, 0 op/s 2026-03-09T18:29:54.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:54 vm00 bash[22468]: cluster 2026-03-09T18:29:52.672542+0000 mgr.x (mgr.24751) 60 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 841 B/s rd, 0 op/s 2026-03-09T18:29:54.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:54 vm08 bash[17774]: cluster 2026-03-09T18:29:52.672542+0000 mgr.x (mgr.24751) 60 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 841 B/s rd, 0 op/s 2026-03-09T18:29:56.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:56 vm00 bash[17468]: cluster 2026-03-09T18:29:54.673069+0000 mgr.x (mgr.24751) 61 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:56.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:56 vm00 bash[22468]: cluster 2026-03-09T18:29:54.673069+0000 mgr.x (mgr.24751) 61 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:56 vm08 bash[17774]: cluster 2026-03-09T18:29:54.673069+0000 mgr.x (mgr.24751) 61 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:29:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: cluster 2026-03-09T18:29:56.673382+0000 mgr.x (mgr.24751) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 574 B/s rd, 0 op/s 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.080328+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.088135+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.095804+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.24751 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.117850+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.124484+0000 mon.b (mon.2) 85 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.124908+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.125345+0000 mon.b (mon.2) 86 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:29:58 vm00 bash[22468]: audit 2026-03-09T18:29:57.132816+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: cluster 2026-03-09T18:29:56.673382+0000 mgr.x (mgr.24751) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 574 B/s rd, 0 op/s 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.080328+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.088135+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.095804+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.117850+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.124484+0000 mon.b (mon.2) 85 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.124908+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.125345+0000 mon.b (mon.2) 86 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:29:58 vm00 bash[17468]: audit 2026-03-09T18:29:57.132816+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: cluster 2026-03-09T18:29:56.673382+0000 mgr.x (mgr.24751) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 574 B/s rd, 0 op/s 2026-03-09T18:29:58.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.080328+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.088135+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.095804+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:29:58.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.117850+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.124484+0000 mon.b (mon.2) 85 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:29:58.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.124908+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:29:58.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.125345+0000 mon.b (mon.2) 86 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:29:58.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:29:58 vm08 bash[17774]: audit 2026-03-09T18:29:57.132816+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:00 vm00 bash[17468]: cluster 2026-03-09T18:29:58.673979+0000 mgr.x (mgr.24751) 63 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:30:00.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:00 vm00 bash[17468]: cluster 
2026-03-09T18:30:00.000109+0000 mon.a (mon.0) 844 : cluster [INF] overall HEALTH_OK 2026-03-09T18:30:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:00 vm00 bash[22468]: cluster 2026-03-09T18:29:58.673979+0000 mgr.x (mgr.24751) 63 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:30:00.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:00 vm00 bash[22468]: cluster 2026-03-09T18:30:00.000109+0000 mon.a (mon.0) 844 : cluster [INF] overall HEALTH_OK 2026-03-09T18:30:00.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:00 vm08 bash[17774]: cluster 2026-03-09T18:29:58.673979+0000 mgr.x (mgr.24751) 63 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:30:00.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:00 vm08 bash[17774]: cluster 2026-03-09T18:30:00.000109+0000 mon.a (mon.0) 844 : cluster [INF] overall HEALTH_OK 2026-03-09T18:30:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:02 vm08 bash[17774]: cluster 2026-03-09T18:30:00.674326+0000 mgr.x (mgr.24751) 64 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 860 B/s rd, 0 op/s 2026-03-09T18:30:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:02 vm08 bash[17774]: audit 2026-03-09T18:30:01.143357+0000 mgr.x (mgr.24751) 65 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:02 vm00 bash[17468]: cluster 2026-03-09T18:30:00.674326+0000 mgr.x (mgr.24751) 64 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 860 B/s rd, 0 op/s 2026-03-09T18:30:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:02 vm00 bash[17468]: audit 
2026-03-09T18:30:01.143357+0000 mgr.x (mgr.24751) 65 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:02.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:02 vm00 bash[22468]: cluster 2026-03-09T18:30:00.674326+0000 mgr.x (mgr.24751) 64 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 860 B/s rd, 0 op/s 2026-03-09T18:30:02.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:02 vm00 bash[22468]: audit 2026-03-09T18:30:01.143357+0000 mgr.x (mgr.24751) 65 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:30:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:30:03] "GET /metrics HTTP/1.1" 200 37528 "" "Prometheus/2.51.0" 2026-03-09T18:30:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:04 vm08 bash[17774]: cluster 2026-03-09T18:30:02.674656+0000 mgr.x (mgr.24751) 66 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:04.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:04 vm08 bash[38540]: ts=2026-03-09T18:30:04.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:04.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:04 vm00 bash[17468]: cluster 2026-03-09T18:30:02.674656+0000 mgr.x (mgr.24751) 66 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:04.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:04 vm00 bash[22468]: cluster 2026-03-09T18:30:02.674656+0000 mgr.x (mgr.24751) 66 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:06 vm08 bash[17774]: cluster 
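The `CephOSDFlapping` evaluation failure above is a PromQL vector-matching error: `ceph_osd_metadata` exists twice for `osd.0` (once with `instance="ceph_cluster"`, once with `instance="192.168.123.108:9283"`), so the `on (ceph_daemon)` join is no longer one-to-one. A possible workaround — a sketch, not the shipped `ceph_alerts.yml` — is to collapse the right-hand side to the join labels before matching:

```yaml
# Hypothetical deduplicated rule: `max by (...)` keeps exactly one
# ceph_osd_metadata series per (ceph_daemon, hostname) pair, so the
# group_left join sees unique labels on its right-hand side.
alert: CephOSDFlapping
expr: |
  (
    rate(ceph_osd_up[5m])
    * on (ceph_daemon) group_left (hostname)
    max by (ceph_daemon, hostname) (ceph_osd_metadata)
  ) * 60 > 1
labels:
  severity: warning
  type: ceph_default
```

The cleaner fix is usually to remove the duplicate scrape target (here the extra `instance="ceph_cluster"` series) rather than patch every rule, but the aggregation above makes the rule robust either way.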
2026-03-09T18:30:04.675169+0000 mgr.x (mgr.24751) 67 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:06.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:06 vm00 bash[22468]: cluster 2026-03-09T18:30:04.675169+0000 mgr.x (mgr.24751) 67 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:06.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:06 vm00 bash[17468]: cluster 2026-03-09T18:30:04.675169+0000 mgr.x (mgr.24751) 67 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:07.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:06 vm08 bash[38540]: ts=2026-03-09T18:30:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", 
domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:08.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:08 vm08 bash[17774]: cluster 2026-03-09T18:30:06.675491+0000 mgr.x (mgr.24751) 68 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:08 vm00 bash[17468]: cluster 2026-03-09T18:30:06.675491+0000 mgr.x (mgr.24751) 68 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:08.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:08 vm00 bash[22468]: cluster 2026-03-09T18:30:06.675491+0000 mgr.x (mgr.24751) 68 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:10 vm08 bash[17774]: cluster 2026-03-09T18:30:08.676089+0000 mgr.x (mgr.24751) 69 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:10.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:10 vm00 bash[17468]: cluster 2026-03-09T18:30:08.676089+0000 mgr.x (mgr.24751) 69 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:10.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:10 vm00 bash[22468]: cluster 2026-03-09T18:30:08.676089+0000 mgr.x (mgr.24751) 69 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
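The `CephNodeDiskspaceWarning` failure has the same shape: two `node_uname_info` series for `instance="vm00"`, differing only in the `cluster` label, break the `on (instance)` join. The same deduplication pattern applies — a sketch under that assumption, not the rule as shipped:

```yaml
# Hypothetical fix: aggregate node_uname_info down to (instance, nodename)
# so each instance contributes exactly one series to the join.
alert: CephNodeDiskspaceWarning
expr: |
  predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5)
  * on (instance) group_left (nodename)
  max by (instance, nodename) (node_uname_info) < 0
labels:
  severity: warning
  type: ceph_default
```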
2026-03-09T18:30:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:12 vm08 bash[17774]: cluster 2026-03-09T18:30:10.676430+0000 mgr.x (mgr.24751) 70 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:12 vm08 bash[17774]: audit 2026-03-09T18:30:11.151172+0000 mgr.x (mgr.24751) 71 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:12 vm08 bash[17774]: audit 2026-03-09T18:30:12.095971+0000 mon.b (mon.2) 87 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:12 vm00 bash[17468]: cluster 2026-03-09T18:30:10.676430+0000 mgr.x (mgr.24751) 70 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:12 vm00 bash[17468]: audit 2026-03-09T18:30:11.151172+0000 mgr.x (mgr.24751) 71 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:12 vm00 bash[17468]: audit 2026-03-09T18:30:12.095971+0000 mon.b (mon.2) 87 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:12 vm00 bash[22468]: cluster 2026-03-09T18:30:10.676430+0000 mgr.x (mgr.24751) 70 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:30:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:12 vm00 bash[22468]: audit 2026-03-09T18:30:11.151172+0000 mgr.x (mgr.24751) 71 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:12 vm00 bash[22468]: audit 2026-03-09T18:30:12.095971+0000 mon.b (mon.2) 87 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:30:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:30:13] "GET /metrics HTTP/1.1" 200 37519 "" "Prometheus/2.51.0" 2026-03-09T18:30:14.428 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:14 vm08 bash[38540]: ts=2026-03-09T18:30:14.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:14.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:14 vm08 bash[17774]: cluster 2026-03-09T18:30:12.676841+0000 mgr.x (mgr.24751) 72 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:14.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:14 vm00 bash[17468]: cluster 2026-03-09T18:30:12.676841+0000 mgr.x (mgr.24751) 72 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:14 vm00 bash[22468]: cluster 2026-03-09T18:30:12.676841+0000 mgr.x (mgr.24751) 72 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:30:16.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:16 vm08 bash[17774]: cluster 2026-03-09T18:30:14.677401+0000 mgr.x (mgr.24751) 73 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:16.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:16 vm00 bash[17468]: cluster 2026-03-09T18:30:14.677401+0000 mgr.x (mgr.24751) 73 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:16.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:16 vm00 bash[22468]: cluster 2026-03-09T18:30:14.677401+0000 mgr.x (mgr.24751) 73 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:17.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:16 vm08 bash[38540]: ts=2026-03-09T18:30:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:18 vm08 bash[17774]: cluster 2026-03-09T18:30:16.677653+0000 mgr.x (mgr.24751) 74 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:18.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:18 vm00 bash[17468]: cluster 2026-03-09T18:30:16.677653+0000 mgr.x (mgr.24751) 74 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:18.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:18 vm00 bash[22468]: cluster 2026-03-09T18:30:16.677653+0000 mgr.x (mgr.24751) 74 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:20 vm08 bash[17774]: cluster 2026-03-09T18:30:18.678148+0000 mgr.x (mgr.24751) 75 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:20.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:20 vm00 bash[17468]: cluster 2026-03-09T18:30:18.678148+0000 mgr.x (mgr.24751) 75 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:20.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:20 vm00 bash[22468]: cluster 2026-03-09T18:30:18.678148+0000 mgr.x (mgr.24751) 75 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 
457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:22.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:22 vm08 bash[17774]: cluster 2026-03-09T18:30:20.678474+0000 mgr.x (mgr.24751) 76 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:22.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:22 vm08 bash[17774]: audit 2026-03-09T18:30:21.156529+0000 mgr.x (mgr.24751) 77 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:22 vm00 bash[22468]: cluster 2026-03-09T18:30:20.678474+0000 mgr.x (mgr.24751) 76 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:22 vm00 bash[22468]: audit 2026-03-09T18:30:21.156529+0000 mgr.x (mgr.24751) 77 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:22.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:22 vm00 bash[17468]: cluster 2026-03-09T18:30:20.678474+0000 mgr.x (mgr.24751) 76 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:22.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:22 vm00 bash[17468]: audit 2026-03-09T18:30:21.156529+0000 mgr.x (mgr.24751) 77 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:30:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:30:23] "GET /metrics HTTP/1.1" 200 37519 "" 
"Prometheus/2.51.0" 2026-03-09T18:30:24.466 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:24 vm08 bash[38540]: ts=2026-03-09T18:30:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on 
one side" 2026-03-09T18:30:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:24 vm08 bash[17774]: cluster 2026-03-09T18:30:22.678756+0000 mgr.x (mgr.24751) 78 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:24.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:24 vm00 bash[22468]: cluster 2026-03-09T18:30:22.678756+0000 mgr.x (mgr.24751) 78 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:24.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:24 vm00 bash[17468]: cluster 2026-03-09T18:30:22.678756+0000 mgr.x (mgr.24751) 78 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:25.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:25 vm00 bash[22468]: cluster 2026-03-09T18:30:24.679241+0000 mgr.x (mgr.24751) 79 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:25.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:25 vm00 bash[17468]: cluster 2026-03-09T18:30:24.679241+0000 mgr.x (mgr.24751) 79 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:25 vm08 bash[17774]: cluster 2026-03-09T18:30:24.679241+0000 mgr.x (mgr.24751) 79 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:27.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:26 vm08 bash[38540]: ts=2026-03-09T18:30:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes 
msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:27 vm08 bash[17774]: audit 2026-03-09T18:30:27.097631+0000 mon.b (mon.2) 88 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:27.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:27 vm00 bash[22468]: audit 2026-03-09T18:30:27.097631+0000 mon.b (mon.2) 88 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:27 vm00 bash[17468]: audit 2026-03-09T18:30:27.097631+0000 mon.b (mon.2) 88 : audit [DBG] from='mgr.24751 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:28 vm08 bash[17774]: cluster 2026-03-09T18:30:26.679514+0000 mgr.x (mgr.24751) 80 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:28 vm00 bash[22468]: cluster 2026-03-09T18:30:26.679514+0000 mgr.x (mgr.24751) 80 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:28 vm00 bash[17468]: cluster 2026-03-09T18:30:26.679514+0000 mgr.x (mgr.24751) 80 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:30.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:30 vm08 bash[17774]: cluster 2026-03-09T18:30:28.679985+0000 mgr.x (mgr.24751) 81 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:30 vm00 bash[17468]: cluster 2026-03-09T18:30:28.679985+0000 mgr.x (mgr.24751) 81 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:30 vm00 bash[22468]: cluster 2026-03-09T18:30:28.679985+0000 mgr.x (mgr.24751) 81 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:32 vm08 bash[17774]: cluster 2026-03-09T18:30:30.680316+0000 mgr.x (mgr.24751) 82 : cluster [DBG] pgmap v42: 
161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:32 vm08 bash[17774]: audit 2026-03-09T18:30:31.167093+0000 mgr.x (mgr.24751) 83 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:32 vm00 bash[22468]: cluster 2026-03-09T18:30:30.680316+0000 mgr.x (mgr.24751) 82 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:32.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:32 vm00 bash[22468]: audit 2026-03-09T18:30:31.167093+0000 mgr.x (mgr.24751) 83 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:32.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:32 vm00 bash[17468]: cluster 2026-03-09T18:30:30.680316+0000 mgr.x (mgr.24751) 82 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:32.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:32 vm00 bash[17468]: audit 2026-03-09T18:30:31.167093+0000 mgr.x (mgr.24751) 83 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:30:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:30:33] "GET /metrics HTTP/1.1" 200 37527 "" "Prometheus/2.51.0" 2026-03-09T18:30:34.417 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:34 vm08 bash[38540]: ts=2026-03-09T18:30:34.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" 
file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:34 vm08 bash[17774]: cluster 2026-03-09T18:30:32.680698+0000 mgr.x (mgr.24751) 84 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 
74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:34.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:34 vm00 bash[22468]: cluster 2026-03-09T18:30:32.680698+0000 mgr.x (mgr.24751) 84 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:34.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:34 vm00 bash[17468]: cluster 2026-03-09T18:30:32.680698+0000 mgr.x (mgr.24751) 84 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:36.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:36 vm08 bash[17774]: cluster 2026-03-09T18:30:34.681285+0000 mgr.x (mgr.24751) 85 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:36.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:36 vm00 bash[22468]: cluster 2026-03-09T18:30:34.681285+0000 mgr.x (mgr.24751) 85 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:36.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:36 vm00 bash[17468]: cluster 2026-03-09T18:30:34.681285+0000 mgr.x (mgr.24751) 85 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:37.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:36 vm08 bash[38540]: ts=2026-03-09T18:30:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 
1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:38 vm08 bash[17774]: cluster 2026-03-09T18:30:36.681571+0000 mgr.x (mgr.24751) 86 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:38.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:38 vm00 bash[22468]: cluster 2026-03-09T18:30:36.681571+0000 mgr.x (mgr.24751) 86 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:38.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:38 vm00 bash[17468]: cluster 2026-03-09T18:30:36.681571+0000 mgr.x (mgr.24751) 86 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:40 vm08 bash[17774]: cluster 2026-03-09T18:30:38.682158+0000 mgr.x (mgr.24751) 
87 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:40.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:40 vm00 bash[22468]: cluster 2026-03-09T18:30:38.682158+0000 mgr.x (mgr.24751) 87 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:40.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:40 vm00 bash[17468]: cluster 2026-03-09T18:30:38.682158+0000 mgr.x (mgr.24751) 87 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:42 vm08 bash[17774]: cluster 2026-03-09T18:30:40.682449+0000 mgr.x (mgr.24751) 88 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:42 vm08 bash[17774]: audit 2026-03-09T18:30:41.172716+0000 mgr.x (mgr.24751) 89 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:42 vm08 bash[17774]: audit 2026-03-09T18:30:42.097954+0000 mon.b (mon.2) 89 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:42.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:42 vm00 bash[22468]: cluster 2026-03-09T18:30:40.682449+0000 mgr.x (mgr.24751) 88 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:42.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:42 vm00 bash[22468]: audit 2026-03-09T18:30:41.172716+0000 mgr.x 
(mgr.24751) 89 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:42.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:42 vm00 bash[22468]: audit 2026-03-09T18:30:42.097954+0000 mon.b (mon.2) 89 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:42 vm00 bash[17468]: cluster 2026-03-09T18:30:40.682449+0000 mgr.x (mgr.24751) 88 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:42 vm00 bash[17468]: audit 2026-03-09T18:30:41.172716+0000 mgr.x (mgr.24751) 89 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:42 vm00 bash[17468]: audit 2026-03-09T18:30:42.097954+0000 mon.b (mon.2) 89 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:43.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:30:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:30:43] "GET /metrics HTTP/1.1" 200 37523 "" "Prometheus/2.51.0" 2026-03-09T18:30:44.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:44 vm08 bash[38540]: ts=2026-03-09T18:30:44.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n 
severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:44 vm00 bash[22468]: cluster 2026-03-09T18:30:42.682717+0000 mgr.x (mgr.24751) 90 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:44 vm00 bash[17468]: cluster 2026-03-09T18:30:42.682717+0000 mgr.x (mgr.24751) 90 : cluster [DBG] pgmap v48: 161 pgs: 161 
active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:44.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:44 vm08 bash[17774]: cluster 2026-03-09T18:30:42.682717+0000 mgr.x (mgr.24751) 90 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:45.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:45 vm00 bash[22468]: cluster 2026-03-09T18:30:44.683268+0000 mgr.x (mgr.24751) 91 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:45.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:45 vm00 bash[17468]: cluster 2026-03-09T18:30:44.683268+0000 mgr.x (mgr.24751) 91 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:45 vm08 bash[17774]: cluster 2026-03-09T18:30:44.683268+0000 mgr.x (mgr.24751) 91 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:47.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:46 vm08 bash[38540]: ts=2026-03-09T18:30:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem 
free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:48 vm08 bash[17774]: cluster 2026-03-09T18:30:46.683520+0000 mgr.x (mgr.24751) 92 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:48 vm00 bash[22468]: cluster 2026-03-09T18:30:46.683520+0000 mgr.x (mgr.24751) 92 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:48.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:48 vm00 bash[17468]: cluster 2026-03-09T18:30:46.683520+0000 mgr.x (mgr.24751) 92 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:50 vm08 bash[17774]: cluster 2026-03-09T18:30:48.684014+0000 mgr.x (mgr.24751) 93 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:50.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:50 vm00 bash[22468]: cluster 
2026-03-09T18:30:48.684014+0000 mgr.x (mgr.24751) 93 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:50.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:50 vm00 bash[17468]: cluster 2026-03-09T18:30:48.684014+0000 mgr.x (mgr.24751) 93 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:52 vm08 bash[17774]: cluster 2026-03-09T18:30:50.684345+0000 mgr.x (mgr.24751) 94 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:52 vm08 bash[17774]: audit 2026-03-09T18:30:51.180862+0000 mgr.x (mgr.24751) 95 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:52 vm00 bash[22468]: cluster 2026-03-09T18:30:50.684345+0000 mgr.x (mgr.24751) 94 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:52 vm00 bash[22468]: audit 2026-03-09T18:30:51.180862+0000 mgr.x (mgr.24751) 95 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:52 vm00 bash[17468]: cluster 2026-03-09T18:30:50.684345+0000 mgr.x (mgr.24751) 94 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:52 vm00 bash[17468]: 
audit 2026-03-09T18:30:51.180862+0000 mgr.x (mgr.24751) 95 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:30:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:30:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:30:53] "GET /metrics HTTP/1.1" 200 37523 "" "Prometheus/2.51.0" 2026-03-09T18:30:54.421 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:54 vm08 bash[38540]: ts=2026-03-09T18:30:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:54 vm08 bash[17774]: cluster 2026-03-09T18:30:52.684584+0000 mgr.x (mgr.24751) 96 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:54 vm00 bash[22468]: cluster 2026-03-09T18:30:52.684584+0000 mgr.x (mgr.24751) 96 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:54 vm00 bash[17468]: cluster 2026-03-09T18:30:52.684584+0000 mgr.x (mgr.24751) 96 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:30:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:56 vm08 bash[17774]: cluster 2026-03-09T18:30:54.685122+0000 mgr.x (mgr.24751) 97 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:56 vm00 bash[22468]: cluster 2026-03-09T18:30:54.685122+0000 mgr.x (mgr.24751) 97 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:56 vm00 bash[17468]: cluster 2026-03-09T18:30:54.685122+0000 mgr.x (mgr.24751) 97 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:30:57.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:30:56 vm08 bash[38540]: ts=2026-03-09T18:30:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:30:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:57 vm08 bash[17774]: audit 2026-03-09T18:30:57.097956+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:57 vm08 bash[17774]: audit 2026-03-09T18:30:57.169858+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:30:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:57 vm00 bash[22468]: audit 2026-03-09T18:30:57.097956+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:57 vm00 bash[22468]: audit 2026-03-09T18:30:57.169858+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:30:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:57 vm00 bash[17468]: audit 2026-03-09T18:30:57.097956+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:30:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:57 vm00 bash[17468]: audit 2026-03-09T18:30:57.169858+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24751 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:30:58.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:30:58 vm08 bash[17774]: cluster 2026-03-09T18:30:56.685427+0000 mgr.x (mgr.24751) 98 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:30:58 vm00 bash[22468]: cluster 2026-03-09T18:30:56.685427+0000 mgr.x (mgr.24751) 98 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:30:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:30:58 vm00 bash[17468]: cluster 2026-03-09T18:30:56.685427+0000 mgr.x (mgr.24751) 98 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:00.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:00 vm08 bash[17774]: cluster 2026-03-09T18:30:58.686191+0000 mgr.x (mgr.24751) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:00 vm00 bash[22468]: cluster 2026-03-09T18:30:58.686191+0000 mgr.x (mgr.24751) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:00 vm00 bash[17468]: cluster 2026-03-09T18:30:58.686191+0000 mgr.x (mgr.24751) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:02 vm08 bash[17774]: cluster 2026-03-09T18:31:00.686810+0000 mgr.x (mgr.24751) 100 : cluster [DBG] pgmap v57: 161 
pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:02 vm08 bash[17774]: audit 2026-03-09T18:31:01.191163+0000 mgr.x (mgr.24751) 101 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:02 vm00 bash[17468]: cluster 2026-03-09T18:31:00.686810+0000 mgr.x (mgr.24751) 100 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:02 vm00 bash[17468]: audit 2026-03-09T18:31:01.191163+0000 mgr.x (mgr.24751) 101 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:02 vm00 bash[22468]: cluster 2026-03-09T18:31:00.686810+0000 mgr.x (mgr.24751) 100 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:02 vm00 bash[22468]: audit 2026-03-09T18:31:01.191163+0000 mgr.x (mgr.24751) 101 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.456533+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.473107+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.480579+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.495234+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.787333+0000 mon.b (mon.2) 92 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:31:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.788201+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:31:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:03 vm08 bash[17774]: audit 2026-03-09T18:31:02.799417+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:31:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:31:03] "GET /metrics HTTP/1.1" 200 37524 "" "Prometheus/2.51.0" 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.456533+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.473107+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.480579+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.495234+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.787333+0000 mon.b (mon.2) 92 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.788201+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:03 vm00 bash[22468]: audit 2026-03-09T18:31:02.799417+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.456533+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.473107+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.480579+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.495234+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.787333+0000 mon.b (mon.2) 92 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.788201+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:31:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:03 vm00 bash[17468]: audit 2026-03-09T18:31:02.799417+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:31:04.464 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:04 vm08 bash[38540]: ts=2026-03-09T18:31:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:04.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:04 vm08 bash[17774]: cluster 2026-03-09T18:31:02.687151+0000 mgr.x (mgr.24751) 102 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:04.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:04 vm00 bash[22468]: cluster 2026-03-09T18:31:02.687151+0000 mgr.x (mgr.24751) 102 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:04.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:04 vm00 bash[17468]: cluster 2026-03-09T18:31:02.687151+0000 mgr.x (mgr.24751) 102 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:31:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:06 vm08 bash[17774]: cluster 2026-03-09T18:31:04.687765+0000 mgr.x (mgr.24751) 103 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:06.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:06 vm00 bash[22468]: cluster 2026-03-09T18:31:04.687765+0000 mgr.x (mgr.24751) 103 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:06.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:06 vm00 bash[17468]: cluster 2026-03-09T18:31:04.687765+0000 mgr.x (mgr.24751) 103 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:07.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:06 vm08 bash[38540]: ts=2026-03-09T18:31:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:08.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:08 vm00 bash[22468]: cluster 2026-03-09T18:31:06.688073+0000 mgr.x (mgr.24751) 104 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:08 vm00 bash[17468]: cluster 2026-03-09T18:31:06.688073+0000 mgr.x (mgr.24751) 104 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:08.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:08 vm08 bash[17774]: cluster 2026-03-09T18:31:06.688073+0000 mgr.x (mgr.24751) 104 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:09 vm00 bash[22468]: cluster 2026-03-09T18:31:08.688567+0000 mgr.x (mgr.24751) 105 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:09.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:09 vm00 bash[17468]: cluster 2026-03-09T18:31:08.688567+0000 mgr.x (mgr.24751) 105 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:09 vm08 bash[17774]: cluster 2026-03-09T18:31:08.688567+0000 mgr.x (mgr.24751) 105 : cluster [DBG] pgmap v61: 161 pgs: 161 
active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:12 vm08 bash[17774]: cluster 2026-03-09T18:31:10.688953+0000 mgr.x (mgr.24751) 106 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:12 vm08 bash[17774]: audit 2026-03-09T18:31:11.198754+0000 mgr.x (mgr.24751) 107 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:12 vm08 bash[17774]: audit 2026-03-09T18:31:12.098236+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:12 vm00 bash[17468]: cluster 2026-03-09T18:31:10.688953+0000 mgr.x (mgr.24751) 106 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:12 vm00 bash[17468]: audit 2026-03-09T18:31:11.198754+0000 mgr.x (mgr.24751) 107 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:12 vm00 bash[17468]: audit 2026-03-09T18:31:12.098236+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:12 vm00 bash[22468]: cluster 2026-03-09T18:31:10.688953+0000 mgr.x (mgr.24751) 106 : cluster [DBG] 
pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:12 vm00 bash[22468]: audit 2026-03-09T18:31:11.198754+0000 mgr.x (mgr.24751) 107 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:12 vm00 bash[22468]: audit 2026-03-09T18:31:12.098236+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:31:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:31:13] "GET /metrics HTTP/1.1" 200 37524 "" "Prometheus/2.51.0" 2026-03-09T18:31:14.412 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:14 vm08 bash[38540]: ts=2026-03-09T18:31:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:14.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:14 vm08 bash[17774]: cluster 2026-03-09T18:31:12.689298+0000 mgr.x (mgr.24751) 108 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:14.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:14 vm00 bash[22468]: cluster 2026-03-09T18:31:12.689298+0000 mgr.x (mgr.24751) 108 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:14.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:14 vm00 bash[17468]: cluster 2026-03-09T18:31:12.689298+0000 mgr.x (mgr.24751) 108 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:31:16.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:16 vm08 bash[17774]: cluster 2026-03-09T18:31:14.689828+0000 mgr.x (mgr.24751) 109 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:16.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:16 vm00 bash[22468]: cluster 2026-03-09T18:31:14.689828+0000 mgr.x (mgr.24751) 109 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:16.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:16 vm00 bash[17468]: cluster 2026-03-09T18:31:14.689828+0000 mgr.x (mgr.24751) 109 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:17.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:16 vm08 bash[38540]: ts=2026-03-09T18:31:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:18 vm08 bash[17774]: cluster 2026-03-09T18:31:16.690143+0000 mgr.x (mgr.24751) 110 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:18.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:18 vm00 bash[22468]: cluster 2026-03-09T18:31:16.690143+0000 mgr.x (mgr.24751) 110 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:18.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:18 vm00 bash[17468]: cluster 2026-03-09T18:31:16.690143+0000 mgr.x (mgr.24751) 110 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:20 vm08 bash[17774]: cluster 2026-03-09T18:31:18.690702+0000 mgr.x (mgr.24751) 111 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:20.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:20 vm00 bash[22468]: cluster 2026-03-09T18:31:18.690702+0000 mgr.x (mgr.24751) 111 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:20.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:20 vm00 bash[17468]: cluster 2026-03-09T18:31:18.690702+0000 mgr.x (mgr.24751) 111 : cluster [DBG] pgmap v66: 161 pgs: 161 
active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:22.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:22 vm08 bash[17774]: cluster 2026-03-09T18:31:20.691065+0000 mgr.x (mgr.24751) 112 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:22.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:22 vm08 bash[17774]: audit 2026-03-09T18:31:21.208420+0000 mgr.x (mgr.24751) 113 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:22 vm00 bash[22468]: cluster 2026-03-09T18:31:20.691065+0000 mgr.x (mgr.24751) 112 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:22.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:22 vm00 bash[22468]: audit 2026-03-09T18:31:21.208420+0000 mgr.x (mgr.24751) 113 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:22.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:22 vm00 bash[17468]: cluster 2026-03-09T18:31:20.691065+0000 mgr.x (mgr.24751) 112 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:22.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:22 vm00 bash[17468]: audit 2026-03-09T18:31:21.208420+0000 mgr.x (mgr.24751) 113 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:31:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:31:23] "GET /metrics HTTP/1.1" 200 
37524 "" "Prometheus/2.51.0" 2026-03-09T18:31:24.459 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:24 vm08 bash[38540]: ts=2026-03-09T18:31:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be 
unique on one side" 2026-03-09T18:31:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:24 vm08 bash[17774]: cluster 2026-03-09T18:31:22.691335+0000 mgr.x (mgr.24751) 114 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:24.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:24 vm00 bash[22468]: cluster 2026-03-09T18:31:22.691335+0000 mgr.x (mgr.24751) 114 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:24 vm00 bash[17468]: cluster 2026-03-09T18:31:22.691335+0000 mgr.x (mgr.24751) 114 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:26.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:26 vm08 bash[17774]: cluster 2026-03-09T18:31:24.691831+0000 mgr.x (mgr.24751) 115 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:26.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:26 vm00 bash[22468]: cluster 2026-03-09T18:31:24.691831+0000 mgr.x (mgr.24751) 115 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:26.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:26 vm00 bash[17468]: cluster 2026-03-09T18:31:24.691831+0000 mgr.x (mgr.24751) 115 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:27.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:26 vm08 bash[38540]: ts=2026-03-09T18:31:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" 
file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:27 vm08 bash[17774]: audit 2026-03-09T18:31:27.102509+0000 mon.b (mon.2) 95 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:27.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:27 vm00 bash[22468]: audit 2026-03-09T18:31:27.102509+0000 mon.b (mon.2) 95 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:27.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:27 vm00 bash[17468]: audit 2026-03-09T18:31:27.102509+0000 
mon.b (mon.2) 95 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:28 vm08 bash[17774]: cluster 2026-03-09T18:31:26.692091+0000 mgr.x (mgr.24751) 116 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:28.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:28 vm00 bash[22468]: cluster 2026-03-09T18:31:26.692091+0000 mgr.x (mgr.24751) 116 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:28.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:28 vm00 bash[17468]: cluster 2026-03-09T18:31:26.692091+0000 mgr.x (mgr.24751) 116 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:29 vm00 bash[22468]: cluster 2026-03-09T18:31:28.692629+0000 mgr.x (mgr.24751) 117 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:29 vm00 bash[17468]: cluster 2026-03-09T18:31:28.692629+0000 mgr.x (mgr.24751) 117 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:29 vm08 bash[17774]: cluster 2026-03-09T18:31:28.692629+0000 mgr.x (mgr.24751) 117 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:32 vm08 bash[17774]: cluster 
2026-03-09T18:31:30.692938+0000 mgr.x (mgr.24751) 118 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:32 vm08 bash[17774]: audit 2026-03-09T18:31:31.216259+0000 mgr.x (mgr.24751) 119 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:32 vm00 bash[22468]: cluster 2026-03-09T18:31:30.692938+0000 mgr.x (mgr.24751) 118 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:32 vm00 bash[22468]: audit 2026-03-09T18:31:31.216259+0000 mgr.x (mgr.24751) 119 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:32.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:32 vm00 bash[17468]: cluster 2026-03-09T18:31:30.692938+0000 mgr.x (mgr.24751) 118 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:32.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:32 vm00 bash[17468]: audit 2026-03-09T18:31:31.216259+0000 mgr.x (mgr.24751) 119 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:31:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:31:33] "GET /metrics HTTP/1.1" 200 37525 "" "Prometheus/2.51.0" 2026-03-09T18:31:34.428 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:34 vm08 bash[38540]: ts=2026-03-09T18:31:34.148Z caller=group.go:483 level=warn 
name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:34 vm08 bash[17774]: cluster 2026-03-09T18:31:32.693243+0000 mgr.x (mgr.24751) 120 : cluster 
[DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:34.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:34 vm00 bash[22468]: cluster 2026-03-09T18:31:32.693243+0000 mgr.x (mgr.24751) 120 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:34.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:34 vm00 bash[17468]: cluster 2026-03-09T18:31:32.693243+0000 mgr.x (mgr.24751) 120 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:36.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:36 vm08 bash[17774]: cluster 2026-03-09T18:31:34.693769+0000 mgr.x (mgr.24751) 121 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:36.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:36 vm00 bash[22468]: cluster 2026-03-09T18:31:34.693769+0000 mgr.x (mgr.24751) 121 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:36.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:36 vm00 bash[17468]: cluster 2026-03-09T18:31:34.693769+0000 mgr.x (mgr.24751) 121 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:37.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:36 vm08 bash[38540]: ts=2026-03-09T18:31:36.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) 
group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:38 vm08 bash[17774]: cluster 2026-03-09T18:31:36.694080+0000 mgr.x (mgr.24751) 122 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:38.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:38 vm00 bash[22468]: cluster 2026-03-09T18:31:36.694080+0000 mgr.x (mgr.24751) 122 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:38.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:38 vm00 bash[17468]: cluster 2026-03-09T18:31:36.694080+0000 mgr.x (mgr.24751) 122 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:40 vm08 
bash[17774]: cluster 2026-03-09T18:31:38.694601+0000 mgr.x (mgr.24751) 123 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:40.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:40 vm00 bash[22468]: cluster 2026-03-09T18:31:38.694601+0000 mgr.x (mgr.24751) 123 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:40.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:40 vm00 bash[17468]: cluster 2026-03-09T18:31:38.694601+0000 mgr.x (mgr.24751) 123 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:42 vm08 bash[17774]: cluster 2026-03-09T18:31:40.694934+0000 mgr.x (mgr.24751) 124 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:42 vm08 bash[17774]: audit 2026-03-09T18:31:41.226457+0000 mgr.x (mgr.24751) 125 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:42 vm08 bash[17774]: audit 2026-03-09T18:31:42.102130+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:42 vm00 bash[22468]: cluster 2026-03-09T18:31:40.694934+0000 mgr.x (mgr.24751) 124 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:31:42 vm00 bash[22468]: audit 2026-03-09T18:31:41.226457+0000 mgr.x (mgr.24751) 125 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:42 vm00 bash[22468]: audit 2026-03-09T18:31:42.102130+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:42 vm00 bash[17468]: cluster 2026-03-09T18:31:40.694934+0000 mgr.x (mgr.24751) 124 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:42 vm00 bash[17468]: audit 2026-03-09T18:31:41.226457+0000 mgr.x (mgr.24751) 125 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:42.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:42 vm00 bash[17468]: audit 2026-03-09T18:31:42.102130+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:43.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:31:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:31:43] "GET /metrics HTTP/1.1" 200 37534 "" "Prometheus/2.51.0" 2026-03-09T18:31:44.454 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:44 vm08 bash[38540]: ts=2026-03-09T18:31:44.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) 
ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:44.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:44 vm08 bash[17774]: cluster 2026-03-09T18:31:42.695235+0000 mgr.x (mgr.24751) 126 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:44.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:44 vm00 bash[22468]: cluster 
2026-03-09T18:31:42.695235+0000 mgr.x (mgr.24751) 126 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:44.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:44 vm00 bash[17468]: cluster 2026-03-09T18:31:42.695235+0000 mgr.x (mgr.24751) 126 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:46 vm08 bash[17774]: cluster 2026-03-09T18:31:44.695798+0000 mgr.x (mgr.24751) 127 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:46.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:46 vm00 bash[22468]: cluster 2026-03-09T18:31:44.695798+0000 mgr.x (mgr.24751) 127 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:46.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:46 vm00 bash[17468]: cluster 2026-03-09T18:31:44.695798+0000 mgr.x (mgr.24751) 127 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:47.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:46 vm08 bash[38540]: ts=2026-03-09T18:31:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} 
will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:48.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:48 vm00 bash[17468]: cluster 2026-03-09T18:31:46.696141+0000 mgr.x (mgr.24751) 128 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:48.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:48 vm00 bash[22468]: cluster 2026-03-09T18:31:46.696141+0000 mgr.x (mgr.24751) 128 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:48.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:48 vm08 bash[17774]: cluster 2026-03-09T18:31:46.696141+0000 mgr.x (mgr.24751) 128 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:49.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:49 vm00 bash[22468]: cluster 2026-03-09T18:31:48.696648+0000 mgr.x (mgr.24751) 129 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:49.878 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:49 vm00 bash[17468]: cluster 2026-03-09T18:31:48.696648+0000 mgr.x (mgr.24751) 129 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:49.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:49 vm08 bash[17774]: cluster 2026-03-09T18:31:48.696648+0000 mgr.x (mgr.24751) 129 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:52 vm08 bash[17774]: cluster 2026-03-09T18:31:50.696950+0000 mgr.x (mgr.24751) 130 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:52 vm08 bash[17774]: audit 2026-03-09T18:31:51.235001+0000 mgr.x (mgr.24751) 131 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:52.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:52 vm00 bash[22468]: cluster 2026-03-09T18:31:50.696950+0000 mgr.x (mgr.24751) 130 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:52.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:52 vm00 bash[22468]: audit 2026-03-09T18:31:51.235001+0000 mgr.x (mgr.24751) 131 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:52 vm00 bash[17468]: cluster 2026-03-09T18:31:50.696950+0000 mgr.x (mgr.24751) 130 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
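[Editor's note, not part of the captured log.] Both recurring rule failures above (CephOSDFlapping and CephNodeDiskspaceWarning) are the same PromQL problem: the right-hand side of a `* on (...) group_left (...)` join contains two series per match group — here, `ceph_osd_metadata` scraped once as `instance="ceph_cluster"` and once as `instance="192.168.123.108:9283"`, and `node_uname_info` once with and once without a `cluster` label. One possible rewrite, assuming the duplicate scrape targets cannot simply be removed from the Prometheus config, is to collapse the right-hand side with `max by (...)` so each match group is unique again:

```promql
# Hypothetical dedup rewrite of the CephNodeDiskspaceWarning expr seen in the log;
# max by (...) keeps one series per (instance, nodename), restoring a 1:N join.
predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5)
  * on (instance) group_left (nodename)
    (max by (instance, nodename) (node_uname_info)) < 0
```

The same pattern (`max by (ceph_daemon, hostname) (ceph_osd_metadata)`) would apply to the CephOSDFlapping rule. This is a sketch of a workaround, not the fix shipped in any particular Ceph release; the root cause in this run appears to be the mixed scrape configuration present mid-upgrade.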
2026-03-09T18:31:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:52 vm00 bash[17468]: audit 2026-03-09T18:31:51.235001+0000 mgr.x (mgr.24751) 131 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:31:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:31:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:31:53] "GET /metrics HTTP/1.1" 200 37534 "" "Prometheus/2.51.0" 2026-03-09T18:31:54.448 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:54 vm08 bash[38540]: ts=2026-03-09T18:31:54.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:54 vm08 bash[17774]: cluster 2026-03-09T18:31:52.697349+0000 mgr.x (mgr.24751) 132 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:54.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:54 vm00 bash[22468]: cluster 2026-03-09T18:31:52.697349+0000 mgr.x (mgr.24751) 132 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:54 vm00 bash[17468]: cluster 2026-03-09T18:31:52.697349+0000 mgr.x (mgr.24751) 132 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
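[Editor's note, not part of the captured log.] The `err="found duplicate series for the match group ..."` messages above follow directly from Prometheus' one-to-many vector-matching semantics: with `on (k) group_left`, each left-hand series must match exactly one right-hand series per value of `k`. A toy Python sketch (simplified — real `group_left` also copies labels from the right side) of why two `ceph_osd_metadata` series for `osd.0` abort rule evaluation:

```python
# Toy model of PromQL "* on (k) group_left" matching. Series are
# (labels_dict, value) pairs; duplicates on the right are an error,
# mirroring the "many-to-many matching not allowed" failure in the log.
def join_group_left(left, right, on):
    index = {}
    for labels, value in right:
        key = labels[on]
        if key in index:
            raise ValueError(
                f"found duplicate series for the match group {{{on}={key!r}}}; "
                "many-to-many matching not allowed")
        index[key] = value
    # Multiply each left series by its unique right-hand match.
    return [(l, v * index[l[on]]) for l, v in left if l[on] in index]

rate_osd_up = [({"ceph_daemon": "osd.0"}, 0.0)]

# Two metadata series for osd.0 -- one per scrape endpoint, as in the log.
metadata_dup = [
    ({"ceph_daemon": "osd.0", "instance": "ceph_cluster"}, 1.0),
    ({"ceph_daemon": "osd.0", "instance": "192.168.123.108:9283"}, 1.0),
]

try:
    join_group_left(rate_osd_up, metadata_dup, on="ceph_daemon")
except ValueError as e:
    print("rule evaluation failed:", e)

# Deduplicating the right-hand side first restores a unique match group.
metadata_dedup = [({"ceph_daemon": "osd.0"}, 1.0)]
print(join_group_left(rate_osd_up, metadata_dedup, on="ceph_daemon"))
```

Once a second scrape target appears (as happens here while cephadm redeploys daemons during the staggered upgrade), every rule using this join pattern starts failing on each evaluation cycle, which is why the warning repeats every ten seconds for the OSD group and every ten seconds, offset, for the nodes group.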
2026-03-09T18:31:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:56 vm08 bash[17774]: cluster 2026-03-09T18:31:54.697892+0000 mgr.x (mgr.24751) 133 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:56 vm00 bash[22468]: cluster 2026-03-09T18:31:54.697892+0000 mgr.x (mgr.24751) 133 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:56 vm00 bash[17468]: cluster 2026-03-09T18:31:54.697892+0000 mgr.x (mgr.24751) 133 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:31:57.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:31:56 vm08 bash[38540]: ts=2026-03-09T18:31:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:31:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:57 vm08 bash[17774]: audit 2026-03-09T18:31:57.102395+0000 mon.b (mon.2) 97 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:57 vm00 bash[22468]: audit 2026-03-09T18:31:57.102395+0000 mon.b (mon.2) 97 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:57.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:57 vm00 bash[17468]: audit 2026-03-09T18:31:57.102395+0000 mon.b (mon.2) 97 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:31:58.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:31:58 vm08 bash[17774]: cluster 2026-03-09T18:31:56.698206+0000 mgr.x (mgr.24751) 134 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:31:58 vm00 bash[22468]: cluster 2026-03-09T18:31:56.698206+0000 mgr.x (mgr.24751) 134 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:31:58.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:31:58 vm00 bash[17468]: cluster 2026-03-09T18:31:56.698206+0000 mgr.x (mgr.24751) 134 : cluster [DBG] pgmap v85: 161 
pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:00.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:00 vm00 bash[22468]: cluster 2026-03-09T18:31:58.698804+0000 mgr.x (mgr.24751) 135 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:00.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:00 vm00 bash[17468]: cluster 2026-03-09T18:31:58.698804+0000 mgr.x (mgr.24751) 135 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:00.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:00 vm08 bash[17774]: cluster 2026-03-09T18:31:58.698804+0000 mgr.x (mgr.24751) 135 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:02 vm00 bash[22468]: cluster 2026-03-09T18:32:00.699152+0000 mgr.x (mgr.24751) 136 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:02 vm00 bash[22468]: audit 2026-03-09T18:32:01.242459+0000 mgr.x (mgr.24751) 137 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:02 vm00 bash[17468]: cluster 2026-03-09T18:32:00.699152+0000 mgr.x (mgr.24751) 136 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:02 vm00 bash[17468]: audit 2026-03-09T18:32:01.242459+0000 mgr.x (mgr.24751) 137 : audit [DBG] 
from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:02 vm08 bash[17774]: cluster 2026-03-09T18:32:00.699152+0000 mgr.x (mgr.24751) 136 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:02 vm08 bash[17774]: audit 2026-03-09T18:32:01.242459+0000 mgr.x (mgr.24751) 137 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:03 vm08 bash[17774]: cluster 2026-03-09T18:32:02.699502+0000 mgr.x (mgr.24751) 138 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:03 vm08 bash[17774]: audit 2026-03-09T18:32:02.836023+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:32:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:32:03] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-09T18:32:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:03 vm00 bash[22468]: cluster 2026-03-09T18:32:02.699502+0000 mgr.x (mgr.24751) 138 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:03 vm00 bash[22468]: audit 2026-03-09T18:32:02.836023+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:03.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:03 vm00 bash[17468]: cluster 2026-03-09T18:32:02.699502+0000 mgr.x (mgr.24751) 138 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:03.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:03 vm00 bash[17468]: audit 2026-03-09T18:32:02.836023+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:04.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:04 vm08 bash[38540]: ts=2026-03-09T18:32:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:06.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:05 vm00 bash[22468]: cluster 2026-03-09T18:32:04.700051+0000 mgr.x (mgr.24751) 139 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:06.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:05 vm00 bash[17468]: cluster 2026-03-09T18:32:04.700051+0000 mgr.x (mgr.24751) 139 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:05 vm08 bash[17774]: cluster 2026-03-09T18:32:04.700051+0000 mgr.x (mgr.24751) 139 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:32:07.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:06 vm08 bash[38540]: ts=2026-03-09T18:32:06.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:08.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:08 vm08 bash[17774]: cluster 2026-03-09T18:32:06.700385+0000 mgr.x (mgr.24751) 140 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:08 vm08 bash[17774]: audit 2026-03-09T18:32:08.162426+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24751 ' entity='mgr.x' 
2026-03-09T18:32:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:08 vm08 bash[17774]: audit 2026-03-09T18:32:08.169976+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:08 vm08 bash[17774]: audit 2026-03-09T18:32:08.181919+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:08 vm08 bash[17774]: audit 2026-03-09T18:32:08.189177+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:08 vm00 bash[22468]: cluster 2026-03-09T18:32:06.700385+0000 mgr.x (mgr.24751) 140 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:08 vm00 bash[22468]: audit 2026-03-09T18:32:08.162426+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:08 vm00 bash[22468]: audit 2026-03-09T18:32:08.169976+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:08 vm00 bash[22468]: audit 2026-03-09T18:32:08.181919+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:08 vm00 bash[22468]: audit 2026-03-09T18:32:08.189177+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:08 vm00 bash[17468]: cluster 2026-03-09T18:32:06.700385+0000 mgr.x (mgr.24751) 140 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:08.878 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:08 vm00 bash[17468]: audit 2026-03-09T18:32:08.162426+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:08 vm00 bash[17468]: audit 2026-03-09T18:32:08.169976+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:08 vm00 bash[17468]: audit 2026-03-09T18:32:08.181919+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:08.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:08 vm00 bash[17468]: audit 2026-03-09T18:32:08.189177+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:09 vm08 bash[17774]: audit 2026-03-09T18:32:08.487752+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:09 vm08 bash[17774]: audit 2026-03-09T18:32:08.488651+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:09 vm08 bash[17774]: audit 2026-03-09T18:32:08.497813+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:09.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:09 vm00 bash[22468]: audit 2026-03-09T18:32:08.487752+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:09.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:09 vm00 bash[22468]: audit 2026-03-09T18:32:08.488651+0000 mon.b (mon.2) 100 : audit 
[INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:09.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:09 vm00 bash[22468]: audit 2026-03-09T18:32:08.497813+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:09.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:09 vm00 bash[17468]: audit 2026-03-09T18:32:08.487752+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:09.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:09 vm00 bash[17468]: audit 2026-03-09T18:32:08.488651+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:09.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:09 vm00 bash[17468]: audit 2026-03-09T18:32:08.497813+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:10 vm08 bash[17774]: cluster 2026-03-09T18:32:08.700893+0000 mgr.x (mgr.24751) 141 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:10.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:10 vm00 bash[22468]: cluster 2026-03-09T18:32:08.700893+0000 mgr.x (mgr.24751) 141 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:10.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:10 vm00 bash[17468]: cluster 2026-03-09T18:32:08.700893+0000 mgr.x (mgr.24751) 141 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:12.878 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:12 vm00 bash[17468]: cluster 2026-03-09T18:32:10.701200+0000 mgr.x (mgr.24751) 142 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:12 vm00 bash[17468]: audit 2026-03-09T18:32:11.252697+0000 mgr.x (mgr.24751) 143 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:12 vm00 bash[17468]: audit 2026-03-09T18:32:12.102496+0000 mon.b (mon.2) 101 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:12 vm00 bash[22468]: cluster 2026-03-09T18:32:10.701200+0000 mgr.x (mgr.24751) 142 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:12 vm00 bash[22468]: audit 2026-03-09T18:32:11.252697+0000 mgr.x (mgr.24751) 143 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:12 vm00 bash[22468]: audit 2026-03-09T18:32:12.102496+0000 mon.b (mon.2) 101 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:12 vm08 bash[17774]: cluster 2026-03-09T18:32:10.701200+0000 mgr.x (mgr.24751) 142 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:32:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:12 vm08 bash[17774]: audit 2026-03-09T18:32:11.252697+0000 mgr.x (mgr.24751) 143 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:12 vm08 bash[17774]: audit 2026-03-09T18:32:12.102496+0000 mon.b (mon.2) 101 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:13 vm08 bash[17774]: cluster 2026-03-09T18:32:12.701701+0000 mgr.x (mgr.24751) 144 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:32:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:32:13] "GET /metrics HTTP/1.1" 200 37528 "" "Prometheus/2.51.0" 2026-03-09T18:32:13.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:13 vm00 bash[17468]: cluster 2026-03-09T18:32:12.701701+0000 mgr.x (mgr.24751) 144 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:13.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:13 vm00 bash[22468]: cluster 2026-03-09T18:32:12.701701+0000 mgr.x (mgr.24751) 144 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:14.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:14 vm08 bash[38540]: ts=2026-03-09T18:32:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: 
(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:16.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:15 vm00 bash[17468]: cluster 2026-03-09T18:32:14.702261+0000 mgr.x (mgr.24751) 145 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:16.128 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:15 vm00 bash[22468]: cluster 2026-03-09T18:32:14.702261+0000 mgr.x (mgr.24751) 145 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:16.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:15 vm08 bash[17774]: cluster 2026-03-09T18:32:14.702261+0000 mgr.x (mgr.24751) 145 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:17.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:16 vm08 bash[38540]: ts=2026-03-09T18:32:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching 
not allowed: matching labels must be unique on one side" 2026-03-09T18:32:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:18 vm08 bash[17774]: cluster 2026-03-09T18:32:16.702625+0000 mgr.x (mgr.24751) 146 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:18.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:18 vm00 bash[22468]: cluster 2026-03-09T18:32:16.702625+0000 mgr.x (mgr.24751) 146 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:18.892 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:18 vm00 bash[17468]: cluster 2026-03-09T18:32:16.702625+0000 mgr.x (mgr.24751) 146 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:20 vm08 bash[17774]: cluster 2026-03-09T18:32:18.703143+0000 mgr.x (mgr.24751) 147 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:20.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:20 vm00 bash[17468]: cluster 2026-03-09T18:32:18.703143+0000 mgr.x (mgr.24751) 147 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:20.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:20 vm00 bash[22468]: cluster 2026-03-09T18:32:18.703143+0000 mgr.x (mgr.24751) 147 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:22.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:22 vm08 bash[17774]: cluster 2026-03-09T18:32:20.703491+0000 mgr.x (mgr.24751) 148 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 74 
MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:22.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:22 vm08 bash[17774]: audit 2026-03-09T18:32:21.262674+0000 mgr.x (mgr.24751) 149 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:22 vm00 bash[17468]: cluster 2026-03-09T18:32:20.703491+0000 mgr.x (mgr.24751) 148 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:22 vm00 bash[17468]: audit 2026-03-09T18:32:21.262674+0000 mgr.x (mgr.24751) 149 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:22 vm00 bash[22468]: cluster 2026-03-09T18:32:20.703491+0000 mgr.x (mgr.24751) 148 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:22 vm00 bash[22468]: audit 2026-03-09T18:32:21.262674+0000 mgr.x (mgr.24751) 149 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:32:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:32:23] "GET /metrics HTTP/1.1" 200 37528 "" "Prometheus/2.51.0" 2026-03-09T18:32:24.472 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:24 vm08 bash[38540]: ts=2026-03-09T18:32:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating 
rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:24 vm08 bash[17774]: cluster 2026-03-09T18:32:22.703810+0000 mgr.x (mgr.24751) 150 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:32:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:24 vm00 bash[17468]: cluster 2026-03-09T18:32:22.703810+0000 mgr.x (mgr.24751) 150 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:24.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:24 vm00 bash[22468]: cluster 2026-03-09T18:32:22.703810+0000 mgr.x (mgr.24751) 150 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:26.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:26 vm00 bash[17468]: cluster 2026-03-09T18:32:24.704300+0000 mgr.x (mgr.24751) 151 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:26.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:26 vm00 bash[22468]: cluster 2026-03-09T18:32:24.704300+0000 mgr.x (mgr.24751) 151 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:26.948 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:26 vm08 bash[17774]: cluster 2026-03-09T18:32:24.704300+0000 mgr.x (mgr.24751) 151 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:27.129 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1' 2026-03-09T18:32:27.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:26 vm08 bash[38540]: ts=2026-03-09T18:32:26.948Z 
caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:27.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:27 vm00 bash[22468]: audit 2026-03-09T18:32:27.107302+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:27.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:27 vm00 bash[17468]: audit 2026-03-09T18:32:27.107302+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:27.834 
INFO:teuthology.orchestra.run.vm00.stdout:Scheduled to redeploy mgr.y on host 'vm00' 2026-03-09T18:32:27.899 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps --refresh' 2026-03-09T18:32:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:27 vm08 bash[17774]: audit 2026-03-09T18:32:27.107302+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (2m) 20s ago 9m 15.8M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (2m) 20s ago 9m 37.5M - dad864ee21e9 b6a0baf6efb9 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 20s ago 9m 42.3M - 3.5 e1d6a67b021e 68f4fe5b96ee 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283 running (4m) 20s ago 12m 532M - 19.2.3-678-ge911bdeb 654f31e6858e c24396cb6839 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:9283 running (13m) 20s ago 13m 401M - 17.2.0 e1d6a67b021e 67bec09a4a4c 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (13m) 20s ago 13m 55.4M 2048M 17.2.0 e1d6a67b021e 819e8890799a 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (12m) 20s ago 12m 42.2M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd 2026-03-09T18:32:28.370 
INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (12m) 20s ago 12m 42.0M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (2m) 20s ago 9m 7463k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (2m) 20s ago 9m 7468k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (12m) 20s ago 12m 49.5M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (11m) 20s ago 11m 51.9M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (11m) 20s ago 11m 46.6M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (11m) 20s ago 11m 51.1M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (11m) 20s ago 11m 49.5M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (10m) 20s ago 10m 49.7M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (10m) 20s ago 10m 48.2M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (10m) 20s ago 10m 48.5M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 20s ago 9m 40.0M - 2.51.0 1d3b7f56885b 64bf8fcd1d5c 2026-03-09T18:32:28.370 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (9m) 20s ago 9m 84.7M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:32:28.371 
INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (9m) 20s ago 9m 85.1M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:32:28.462 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180' 2026-03-09T18:32:28.701 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: cluster 2026-03-09T18:32:26.704655+0000 mgr.x (mgr.24751) 152 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:28.701 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:27.607969+0000 mon.c (mon.1) 135 : audit [DBG] from='client.? 192.168.123.100:0/2144476984' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:27.811955+0000 mgr.x (mgr.24751) 153 : audit [DBG] from='client.24800 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.y", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: cephadm 2026-03-09T18:32:27.820511+0000 mgr.x (mgr.24751) 154 : cephadm [INF] Schedule redeploy daemon mgr.y 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:27.821880+0000 mon.a (mon.0) 855 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:27.829256+0000 mon.a (mon.0) 856 : audit [INF] 
from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:27.834934+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:27.836649+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:28 vm00 bash[22468]: audit 2026-03-09T18:32:28.366520+0000 mgr.x (mgr.24751) 155 : audit [DBG] from='client.15003 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: cluster 2026-03-09T18:32:26.704655+0000 mgr.x (mgr.24751) 152 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:27.607969+0000 mon.c (mon.1) 135 : audit [DBG] from='client.? 
192.168.123.100:0/2144476984' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:27.811955+0000 mgr.x (mgr.24751) 153 : audit [DBG] from='client.24800 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.y", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: cephadm 2026-03-09T18:32:27.820511+0000 mgr.x (mgr.24751) 154 : cephadm [INF] Schedule redeploy daemon mgr.y 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:27.821880+0000 mon.a (mon.0) 855 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:27.829256+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:27.834934+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:27.836649+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.702 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:28 vm00 bash[17468]: audit 2026-03-09T18:32:28.366520+0000 mgr.x (mgr.24751) 155 : audit [DBG] from='client.15003 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: cluster 
2026-03-09T18:32:26.704655+0000 mgr.x (mgr.24751) 152 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:27.607969+0000 mon.c (mon.1) 135 : audit [DBG] from='client.? 192.168.123.100:0/2144476984' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:27.811955+0000 mgr.x (mgr.24751) 153 : audit [DBG] from='client.24800 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.y", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: cephadm 2026-03-09T18:32:27.820511+0000 mgr.x (mgr.24751) 154 : cephadm [INF] Schedule redeploy daemon mgr.y 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:27.821880+0000 mon.a (mon.0) 855 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:27.829256+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:27.834934+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:27.836649+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:28.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:28 vm08 bash[17774]: audit 2026-03-09T18:32:28.366520+0000 mgr.x (mgr.24751) 155 : audit [DBG] from='client.15003 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:32:30.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:30 vm00 bash[22468]: cluster 2026-03-09T18:32:28.705187+0000 mgr.x (mgr.24751) 156 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:30.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:30 vm00 bash[22468]: audit 2026-03-09T18:32:29.379059+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:30.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:30 vm00 bash[17468]: cluster 2026-03-09T18:32:28.705187+0000 mgr.x (mgr.24751) 156 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:30.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:30 vm00 bash[17468]: audit 2026-03-09T18:32:29.379059+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:30.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:30 vm08 bash[17774]: cluster 2026-03-09T18:32:28.705187+0000 mgr.x (mgr.24751) 156 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:30.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:30 vm08 bash[17774]: audit 2026-03-09T18:32:29.379059+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:32 vm08 bash[17774]: cluster 2026-03-09T18:32:30.705566+0000 mgr.x (mgr.24751) 157 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:32:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:32 vm08 bash[17774]: audit 2026-03-09T18:32:31.270039+0000 mgr.x (mgr.24751) 158 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:32 vm00 bash[22468]: cluster 2026-03-09T18:32:30.705566+0000 mgr.x (mgr.24751) 157 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:32 vm00 bash[22468]: audit 2026-03-09T18:32:31.270039+0000 mgr.x (mgr.24751) 158 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:32 vm00 bash[17468]: cluster 2026-03-09T18:32:30.705566+0000 mgr.x (mgr.24751) 157 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:32 vm00 bash[17468]: audit 2026-03-09T18:32:31.270039+0000 mgr.x (mgr.24751) 158 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:32:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:32:33] "GET /metrics HTTP/1.1" 200 37526 "" "Prometheus/2.51.0" 2026-03-09T18:32:34.469 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:34 vm08 bash[38540]: ts=2026-03-09T18:32:34.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: 
CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:34.469 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:34 vm08 bash[17774]: cluster 2026-03-09T18:32:32.705914+0000 mgr.x (mgr.24751) 159 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:34.878 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:34 vm00 bash[17468]: cluster 2026-03-09T18:32:32.705914+0000 mgr.x (mgr.24751) 159 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:34.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:34 vm00 bash[22468]: cluster 2026-03-09T18:32:32.705914+0000 mgr.x (mgr.24751) 159 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:35.293 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.294 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.294 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: Stopping Ceph mgr.y for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:32:35.294 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.294 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.294 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.294 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.294 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:32:35.295 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.295 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.549 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 bash[53864]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mgr-y 2026-03-09T18:32:35.549 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:32:35.549 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service: Failed with result 'exit-code'. 2026-03-09T18:32:35.549 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: Stopped Ceph mgr.y for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.679654+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.689218+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: cluster 2026-03-09T18:32:34.706367+0000 mgr.x (mgr.24751) 160 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.713785+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.720500+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.721397+0000 mon.b (mon.2) 105 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.722635+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.731404+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.817 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.742820+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.744313+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:32:35.817 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.745154+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: cephadm 2026-03-09T18:32:34.745862+0000 mgr.x (mgr.24751) 161 : cephadm [INF] Deploying daemon mgr.y on vm00 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:34.747067+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:35.659614+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:35 vm00 bash[17468]: audit 2026-03-09T18:32:35.671089+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: 
/etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: Started Ceph mgr.y for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 bash[53976]: debug 2026-03-09T18:32:35.815+0000 7ff6c6b55140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.679654+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.689218+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: cluster 2026-03-09T18:32:34.706367+0000 mgr.x (mgr.24751) 160 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.713785+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.720500+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.721397+0000 mon.b (mon.2) 105 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.722635+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.731404+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.742820+0000 
mon.b (mon.2) 106 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.744313+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.745154+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: cephadm 2026-03-09T18:32:34.745862+0000 mgr.x (mgr.24751) 161 : cephadm [INF] Deploying daemon mgr.y on vm00 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:34.747067+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:35.659614+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:35 vm00 bash[22468]: audit 2026-03-09T18:32:35.671089+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.818 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.818 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.818 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.818 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.818 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.819 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:32:35 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:32:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.679654+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.689218+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: cluster 2026-03-09T18:32:34.706367+0000 mgr.x (mgr.24751) 160 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.713785+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.720500+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.721397+0000 mon.b (mon.2) 105 : audit [INF] from='mgr.24751 
192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.722635+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.731404+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.742820+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.744313+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.745154+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: cephadm 2026-03-09T18:32:34.745862+0000 mgr.x (mgr.24751) 161 : cephadm [INF] Deploying daemon mgr.y on vm00 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:34.747067+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24751 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:32:35.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:35.659614+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:35.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:35 vm08 bash[17774]: audit 2026-03-09T18:32:35.671089+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:36.128 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 bash[53976]: debug 2026-03-09T18:32:35.851+0000 7ff6c6b55140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:32:36.128 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:35 vm00 bash[53976]: debug 2026-03-09T18:32:35.963+0000 7ff6c6b55140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:32:36.628 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: debug 2026-03-09T18:32:36.239+0000 7ff6c6b55140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:32:36.948 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:36 vm08 bash[17774]: audit 2026-03-09T18:32:35.682302+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:36.948 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:36 vm08 bash[17774]: audit 2026-03-09T18:32:35.704442+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:36.948 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:36 vm08 bash[17774]: audit 2026-03-09T18:32:35.730364+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:36 vm00 bash[22468]: audit 2026-03-09T18:32:35.682302+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:36 vm00 bash[22468]: audit 
2026-03-09T18:32:35.704442+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:36 vm00 bash[22468]: audit 2026-03-09T18:32:35.730364+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:36 vm00 bash[17468]: audit 2026-03-09T18:32:35.682302+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:36 vm00 bash[17468]: audit 2026-03-09T18:32:35.704442+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:36 vm00 bash[17468]: audit 2026-03-09T18:32:35.730364+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: debug 2026-03-09T18:32:36.671+0000 7ff6c6b55140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: debug 2026-03-09T18:32:36.759+0000 7ff6c6b55140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:32:37.033 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-09T18:32:37.033 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T18:32:37.034 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: from numpy import show_config as show_numpy_config 2026-03-09T18:32:37.034 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:36 vm00 bash[53976]: debug 2026-03-09T18:32:36.887+0000 7ff6c6b55140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:32:37.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:36 vm08 bash[38540]: ts=2026-03-09T18:32:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri 
Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:37.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.035+0000 7ff6c6b55140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:32:37.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.079+0000 7ff6c6b55140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:32:37.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.119+0000 7ff6c6b55140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:32:37.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.163+0000 7ff6c6b55140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:32:37.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.211+0000 7ff6c6b55140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:32:37.902 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:37 vm00 bash[22468]: cluster 2026-03-09T18:32:36.706663+0000 mgr.x (mgr.24751) 162 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:37.902 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:37 vm00 bash[17468]: cluster 2026-03-09T18:32:36.706663+0000 mgr.x (mgr.24751) 162 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:37.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.643+0000 7ff6c6b55140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:32:37.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 
bash[53976]: debug 2026-03-09T18:32:37.683+0000 7ff6c6b55140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:32:37.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.719+0000 7ff6c6b55140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:32:37.902 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.863+0000 7ff6c6b55140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:32:38.202 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.903+0000 7ff6c6b55140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:32:38.203 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:37 vm00 bash[53976]: debug 2026-03-09T18:32:37.939+0000 7ff6c6b55140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:32:38.203 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.051+0000 7ff6c6b55140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:32:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:37 vm08 bash[17774]: cluster 2026-03-09T18:32:36.706663+0000 mgr.x (mgr.24751) 162 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:38.600 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.203+0000 7ff6c6b55140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:32:38.600 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.375+0000 7ff6c6b55140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:32:38.600 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.411+0000 7ff6c6b55140 -1 mgr[py] Module 
iostat has missing NOTIFY_TYPES member 2026-03-09T18:32:38.600 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.451+0000 7ff6c6b55140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:32:38.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.599+0000 7ff6c6b55140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:32:38.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: debug 2026-03-09T18:32:38.823+0000 7ff6c6b55140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:32:38.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: [09/Mar/2026:18:32:38] ENGINE Bus STARTING 2026-03-09T18:32:38.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: CherryPy Checker: 2026-03-09T18:32:38.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: The Application mounted at '' has an empty config. 
2026-03-09T18:32:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: [09/Mar/2026:18:32:38] ENGINE Serving on http://:::9283 2026-03-09T18:32:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:32:38 vm00 bash[53976]: [09/Mar/2026:18:32:38] ENGINE Bus STARTED 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: cluster 2026-03-09T18:32:38.707182+0000 mgr.x (mgr.24751) 163 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: cluster 2026-03-09T18:32:38.830449+0000 mon.a (mon.0) 869 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: cluster 2026-03-09T18:32:38.830532+0000 mon.a (mon.0) 870 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: audit 2026-03-09T18:32:38.831846+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: audit 2026-03-09T18:32:38.832135+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: audit 2026-03-09T18:32:38.832959+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.? 
192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:39 vm00 bash[22468]: audit 2026-03-09T18:32:38.833314+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: cluster 2026-03-09T18:32:38.707182+0000 mgr.x (mgr.24751) 163 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: cluster 2026-03-09T18:32:38.830449+0000 mon.a (mon.0) 869 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: cluster 2026-03-09T18:32:38.830532+0000 mon.a (mon.0) 870 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: audit 2026-03-09T18:32:38.831846+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: audit 2026-03-09T18:32:38.832135+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: audit 2026-03-09T18:32:38.832959+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.? 
192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:32:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:39 vm00 bash[17468]: audit 2026-03-09T18:32:38.833314+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: cluster 2026-03-09T18:32:38.707182+0000 mgr.x (mgr.24751) 163 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: cluster 2026-03-09T18:32:38.830449+0000 mon.a (mon.0) 869 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: cluster 2026-03-09T18:32:38.830532+0000 mon.a (mon.0) 870 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: audit 2026-03-09T18:32:38.831846+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: audit 2026-03-09T18:32:38.832135+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: audit 2026-03-09T18:32:38.832959+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.? 
192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:32:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:39 vm08 bash[17774]: audit 2026-03-09T18:32:38.833314+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.? 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:32:41.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:40 vm00 bash[17468]: cluster 2026-03-09T18:32:39.768133+0000 mon.a (mon.0) 871 : cluster [DBG] mgrmap e27: x(active, since 3m), standbys: y 2026-03-09T18:32:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:40 vm00 bash[22468]: cluster 2026-03-09T18:32:39.768133+0000 mon.a (mon.0) 871 : cluster [DBG] mgrmap e27: x(active, since 3m), standbys: y 2026-03-09T18:32:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:40 vm08 bash[17774]: cluster 2026-03-09T18:32:39.768133+0000 mon.a (mon.0) 871 : cluster [DBG] mgrmap e27: x(active, since 3m), standbys: y 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: cluster 2026-03-09T18:32:40.707548+0000 mgr.x (mgr.24751) 164 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: audit 2026-03-09T18:32:41.099131+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: audit 2026-03-09T18:32:41.104440+0000 mon.b (mon.2) 110 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: audit 2026-03-09T18:32:41.105162+0000 mon.b (mon.2) 111 : audit 
[INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: audit 2026-03-09T18:32:41.106297+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: audit 2026-03-09T18:32:41.114425+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:42 vm00 bash[22468]: audit 2026-03-09T18:32:41.280842+0000 mgr.x (mgr.24751) 165 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: cluster 2026-03-09T18:32:40.707548+0000 mgr.x (mgr.24751) 164 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: audit 2026-03-09T18:32:41.099131+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: audit 2026-03-09T18:32:41.104440+0000 mon.b (mon.2) 110 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: audit 2026-03-09T18:32:41.105162+0000 mon.b (mon.2) 111 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: audit 
2026-03-09T18:32:41.106297+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: audit 2026-03-09T18:32:41.114425+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:42 vm00 bash[17468]: audit 2026-03-09T18:32:41.280842+0000 mgr.x (mgr.24751) 165 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: cluster 2026-03-09T18:32:40.707548+0000 mgr.x (mgr.24751) 164 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: audit 2026-03-09T18:32:41.099131+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: audit 2026-03-09T18:32:41.104440+0000 mon.b (mon.2) 110 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: audit 2026-03-09T18:32:41.105162+0000 mon.b (mon.2) 111 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: audit 2026-03-09T18:32:41.106297+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: audit 2026-03-09T18:32:41.114425+0000 mon.a 
(mon.0) 874 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:32:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:42 vm08 bash[17774]: audit 2026-03-09T18:32:41.280842+0000 mgr.x (mgr.24751) 165 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:43.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:43 vm00 bash[17468]: audit 2026-03-09T18:32:42.102883+0000 mon.b (mon.2) 112 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:43.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:43 vm00 bash[22468]: audit 2026-03-09T18:32:42.102883+0000 mon.b (mon.2) 112 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:43 vm08 bash[17774]: audit 2026-03-09T18:32:42.102883+0000 mon.b (mon.2) 112 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:43.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:32:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:32:43] "GET /metrics HTTP/1.1" 200 37530 "" "Prometheus/2.51.0" 2026-03-09T18:32:44.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:44 vm00 bash[17468]: cluster 2026-03-09T18:32:42.707889+0000 mgr.x (mgr.24751) 166 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:44.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:44 vm00 bash[22468]: cluster 2026-03-09T18:32:42.707889+0000 mgr.x (mgr.24751) 166 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T18:32:44.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:44 vm08 bash[17774]: cluster 2026-03-09T18:32:42.707889+0000 mgr.x (mgr.24751) 166 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:44.475 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:44 vm08 bash[38540]: ts=2026-03-09T18:32:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:46.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:46 vm00 bash[17468]: cluster 2026-03-09T18:32:44.708400+0000 mgr.x (mgr.24751) 167 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:46.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:46 vm00 bash[22468]: cluster 2026-03-09T18:32:44.708400+0000 mgr.x (mgr.24751) 167 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:46 vm08 bash[17774]: cluster 2026-03-09T18:32:44.708400+0000 mgr.x (mgr.24751) 167 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T18:32:47.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:46 vm08 bash[38540]: ts=2026-03-09T18:32:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:48 vm08 bash[17774]: cluster 2026-03-09T18:32:46.708758+0000 mgr.x (mgr.24751) 168 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:48.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:48 vm00 bash[17468]: cluster 2026-03-09T18:32:46.708758+0000 mgr.x (mgr.24751) 168 : cluster [DBG] pgmap v110: 161 pgs: 161 
active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:48.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:48 vm00 bash[22468]: cluster 2026-03-09T18:32:46.708758+0000 mgr.x (mgr.24751) 168 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:50 vm08 bash[17774]: cluster 2026-03-09T18:32:48.709406+0000 mgr.x (mgr.24751) 169 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:50.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:50 vm00 bash[17468]: cluster 2026-03-09T18:32:48.709406+0000 mgr.x (mgr.24751) 169 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:50.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:50 vm00 bash[22468]: cluster 2026-03-09T18:32:48.709406+0000 mgr.x (mgr.24751) 169 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:52 vm08 bash[17774]: cluster 2026-03-09T18:32:50.709755+0000 mgr.x (mgr.24751) 170 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:52 vm08 bash[17774]: audit 2026-03-09T18:32:51.291532+0000 mgr.x (mgr.24751) 171 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:52.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:52 vm00 bash[22468]: cluster 2026-03-09T18:32:50.709755+0000 mgr.x (mgr.24751) 170 : cluster [DBG] pgmap v112: 
161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:52.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:52 vm00 bash[22468]: audit 2026-03-09T18:32:51.291532+0000 mgr.x (mgr.24751) 171 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:52.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:52 vm00 bash[17468]: cluster 2026-03-09T18:32:50.709755+0000 mgr.x (mgr.24751) 170 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:52.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:52 vm00 bash[17468]: audit 2026-03-09T18:32:51.291532+0000 mgr.x (mgr.24751) 171 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:32:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:32:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:32:53] "GET /metrics HTTP/1.1" 200 37530 "" "Prometheus/2.51.0" 2026-03-09T18:32:54.470 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:54 vm08 bash[38540]: ts=2026-03-09T18:32:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:54 vm08 bash[17774]: cluster 2026-03-09T18:32:52.710103+0000 mgr.x (mgr.24751) 172 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:54.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:54 vm00 bash[17468]: cluster 2026-03-09T18:32:52.710103+0000 mgr.x (mgr.24751) 172 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:54.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:54 vm00 bash[22468]: cluster 
2026-03-09T18:32:52.710103+0000 mgr.x (mgr.24751) 172 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:56 vm08 bash[17774]: cluster 2026-03-09T18:32:54.710643+0000 mgr.x (mgr.24751) 173 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:56 vm00 bash[17468]: cluster 2026-03-09T18:32:54.710643+0000 mgr.x (mgr.24751) 173 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:56 vm00 bash[22468]: cluster 2026-03-09T18:32:54.710643+0000 mgr.x (mgr.24751) 173 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:32:57.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:32:56 vm08 bash[38540]: ts=2026-03-09T18:32:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", 
cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:32:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:57 vm00 bash[22468]: audit 2026-03-09T18:32:57.103241+0000 mon.b (mon.2) 113 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:57.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:57 vm00 bash[17468]: audit 2026-03-09T18:32:57.103241+0000 mon.b (mon.2) 113 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:57 vm08 bash[17774]: audit 2026-03-09T18:32:57.103241+0000 mon.b (mon.2) 113 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:32:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:32:58 vm00 bash[22468]: cluster 2026-03-09T18:32:56.710988+0000 mgr.x (mgr.24751) 174 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:32:58.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:32:58 vm00 bash[17468]: cluster 2026-03-09T18:32:56.710988+0000 mgr.x (mgr.24751) 174 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T18:32:58.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:32:58 vm08 bash[17774]: cluster 2026-03-09T18:32:56.710988+0000 mgr.x (mgr.24751) 174 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:00.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:00 vm00 bash[22468]: cluster 2026-03-09T18:32:58.711565+0000 mgr.x (mgr.24751) 175 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:00.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:00 vm00 bash[17468]: cluster 2026-03-09T18:32:58.711565+0000 mgr.x (mgr.24751) 175 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:00.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:00 vm08 bash[17774]: cluster 2026-03-09T18:32:58.711565+0000 mgr.x (mgr.24751) 175 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:02 vm08 bash[17774]: cluster 2026-03-09T18:33:00.711898+0000 mgr.x (mgr.24751) 176 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:02 vm08 bash[17774]: audit 2026-03-09T18:33:01.302248+0000 mgr.x (mgr.24751) 177 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:02 vm00 bash[22468]: cluster 2026-03-09T18:33:00.711898+0000 mgr.x (mgr.24751) 176 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:02 vm00 bash[22468]: audit 2026-03-09T18:33:01.302248+0000 mgr.x (mgr.24751) 177 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:02 vm00 bash[17468]: cluster 2026-03-09T18:33:00.711898+0000 mgr.x (mgr.24751) 176 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:02 vm00 bash[17468]: audit 2026-03-09T18:33:01.302248+0000 mgr.x (mgr.24751) 177 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:33:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:33:03] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-09T18:33:04.463 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:04 vm08 bash[38540]: ts=2026-03-09T18:33:04.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:04.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:04 vm08 bash[17774]: cluster 2026-03-09T18:33:02.712258+0000 mgr.x (mgr.24751) 178 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:04.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:04 vm00 bash[22468]: cluster 2026-03-09T18:33:02.712258+0000 mgr.x (mgr.24751) 178 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:04.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:04 vm00 bash[17468]: cluster 2026-03-09T18:33:02.712258+0000 mgr.x (mgr.24751) 178 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:33:06.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:06 vm00 bash[22468]: cluster 2026-03-09T18:33:04.712763+0000 mgr.x (mgr.24751) 179 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:06.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:06 vm00 bash[17468]: cluster 2026-03-09T18:33:04.712763+0000 mgr.x (mgr.24751) 179 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:06.948 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:06 vm08 bash[17774]: cluster 2026-03-09T18:33:04.712763+0000 mgr.x (mgr.24751) 179 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:07.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:06 vm08 bash[38540]: ts=2026-03-09T18:33:06.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:07.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:07 vm00 bash[22468]: cluster 2026-03-09T18:33:06.713052+0000 mgr.x (mgr.24751) 180 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:07.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:07 vm00 bash[17468]: cluster 2026-03-09T18:33:06.713052+0000 mgr.x (mgr.24751) 180 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:07 vm08 bash[17774]: cluster 2026-03-09T18:33:06.713052+0000 mgr.x (mgr.24751) 180 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:10.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:09 vm00 bash[22468]: cluster 2026-03-09T18:33:08.713538+0000 mgr.x (mgr.24751) 181 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:10.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:09 vm00 bash[17468]: cluster 2026-03-09T18:33:08.713538+0000 mgr.x (mgr.24751) 181 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:09 vm08 bash[17774]: cluster 2026-03-09T18:33:08.713538+0000 mgr.x (mgr.24751) 181 : cluster [DBG] pgmap v121: 161 pgs: 161 
active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:12 vm08 bash[17774]: cluster 2026-03-09T18:33:10.713822+0000 mgr.x (mgr.24751) 182 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:12 vm08 bash[17774]: audit 2026-03-09T18:33:11.307395+0000 mgr.x (mgr.24751) 183 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:12 vm08 bash[17774]: audit 2026-03-09T18:33:12.103442+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:12 vm00 bash[22468]: cluster 2026-03-09T18:33:10.713822+0000 mgr.x (mgr.24751) 182 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:12 vm00 bash[22468]: audit 2026-03-09T18:33:11.307395+0000 mgr.x (mgr.24751) 183 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:12 vm00 bash[22468]: audit 2026-03-09T18:33:12.103442+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:12 vm00 bash[17468]: cluster 2026-03-09T18:33:10.713822+0000 mgr.x (mgr.24751) 182 : cluster 
[DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:12 vm00 bash[17468]: audit 2026-03-09T18:33:11.307395+0000 mgr.x (mgr.24751) 183 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:12 vm00 bash[17468]: audit 2026-03-09T18:33:12.103442+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:33:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:33:13] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-09T18:33:14.466 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:14 vm08 bash[38540]: ts=2026-03-09T18:33:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:14.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:14 vm08 bash[17774]: cluster 2026-03-09T18:33:12.714152+0000 mgr.x (mgr.24751) 184 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:14.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:14 vm00 bash[22468]: cluster 2026-03-09T18:33:12.714152+0000 mgr.x (mgr.24751) 184 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:14.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:14 vm00 bash[17468]: cluster 2026-03-09T18:33:12.714152+0000 mgr.x (mgr.24751) 184 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:33:16.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:16 vm08 bash[17774]: cluster 2026-03-09T18:33:14.714562+0000 mgr.x (mgr.24751) 185 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:16.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:16 vm00 bash[22468]: cluster 2026-03-09T18:33:14.714562+0000 mgr.x (mgr.24751) 185 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:16.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:16 vm00 bash[17468]: cluster 2026-03-09T18:33:14.714562+0000 mgr.x (mgr.24751) 185 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:17.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:16 vm08 bash[38540]: ts=2026-03-09T18:33:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:18.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:18 vm00 bash[22468]: cluster 2026-03-09T18:33:16.714854+0000 mgr.x (mgr.24751) 186 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:18.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:18 vm00 bash[17468]: cluster 2026-03-09T18:33:16.714854+0000 mgr.x (mgr.24751) 186 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:18.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:18 vm08 bash[17774]: cluster 2026-03-09T18:33:16.714854+0000 mgr.x (mgr.24751) 186 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 74 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:20.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:20 vm00 bash[22468]: cluster 2026-03-09T18:33:18.715372+0000 mgr.x (mgr.24751) 187 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 79 MiB used, 160 GiB / 160 GiB avail; 47 KiB/s rd, 0 B/s wr, 76 op/s 2026-03-09T18:33:20.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:20 vm00 bash[17468]: cluster 2026-03-09T18:33:18.715372+0000 mgr.x (mgr.24751) 187 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 79 MiB used, 160 GiB / 160 GiB avail; 47 KiB/s rd, 0 B/s wr, 76 op/s 2026-03-09T18:33:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:20 vm08 bash[17774]: cluster 2026-03-09T18:33:18.715372+0000 mgr.x (mgr.24751) 187 : cluster [DBG] pgmap 
v126: 161 pgs: 161 active+clean; 457 KiB data, 79 MiB used, 160 GiB / 160 GiB avail; 47 KiB/s rd, 0 B/s wr, 76 op/s 2026-03-09T18:33:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:22 vm00 bash[22468]: cluster 2026-03-09T18:33:20.715796+0000 mgr.x (mgr.24751) 188 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 79 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 0 B/s wr, 76 op/s 2026-03-09T18:33:22.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:22 vm00 bash[22468]: audit 2026-03-09T18:33:21.318091+0000 mgr.x (mgr.24751) 189 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:22 vm00 bash[17468]: cluster 2026-03-09T18:33:20.715796+0000 mgr.x (mgr.24751) 188 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 79 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 0 B/s wr, 76 op/s 2026-03-09T18:33:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:22 vm00 bash[17468]: audit 2026-03-09T18:33:21.318091+0000 mgr.x (mgr.24751) 189 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:22 vm08 bash[17774]: cluster 2026-03-09T18:33:20.715796+0000 mgr.x (mgr.24751) 188 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 79 MiB used, 160 GiB / 160 GiB avail; 46 KiB/s rd, 0 B/s wr, 76 op/s 2026-03-09T18:33:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:22 vm08 bash[17774]: audit 2026-03-09T18:33:21.318091+0000 mgr.x (mgr.24751) 189 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:23 vm08 bash[17774]: cluster 
2026-03-09T18:33:22.716191+0000 mgr.x (mgr.24751) 190 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 83 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s 2026-03-09T18:33:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:33:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:33:23] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-09T18:33:23.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:23 vm00 bash[22468]: cluster 2026-03-09T18:33:22.716191+0000 mgr.x (mgr.24751) 190 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 83 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s 2026-03-09T18:33:23.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:23 vm00 bash[17468]: cluster 2026-03-09T18:33:22.716191+0000 mgr.x (mgr.24751) 190 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 83 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s 2026-03-09T18:33:24.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:24 vm08 bash[38540]: ts=2026-03-09T18:33:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:26.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:25 vm00 bash[22468]: cluster 2026-03-09T18:33:24.716639+0000 mgr.x (mgr.24751) 191 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:33:26.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:25 vm00 bash[17468]: cluster 2026-03-09T18:33:24.716639+0000 mgr.x (mgr.24751) 191 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:33:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:25 vm08 bash[17774]: cluster 2026-03-09T18:33:24.716639+0000 mgr.x (mgr.24751) 191 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB 
avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:33:27.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:26 vm08 bash[38540]: ts=2026-03-09T18:33:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:27 vm08 bash[17774]: audit 2026-03-09T18:33:27.103599+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:27.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:27 vm00 bash[22468]: audit 2026-03-09T18:33:27.103599+0000 mon.b (mon.2) 115 : audit 
[DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:27.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:27 vm00 bash[17468]: audit 2026-03-09T18:33:27.103599+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:28 vm08 bash[17774]: cluster 2026-03-09T18:33:26.716949+0000 mgr.x (mgr.24751) 192 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:33:28.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:28 vm00 bash[22468]: cluster 2026-03-09T18:33:26.716949+0000 mgr.x (mgr.24751) 192 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:33:28.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:28 vm00 bash[17468]: cluster 2026-03-09T18:33:26.716949+0000 mgr.x (mgr.24751) 192 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:33:30.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:30 vm00 bash[22468]: cluster 2026-03-09T18:33:28.717433+0000 mgr.x (mgr.24751) 193 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:33:30.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:30 vm00 bash[17468]: cluster 2026-03-09T18:33:28.717433+0000 mgr.x (mgr.24751) 193 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:33:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:30 vm08 
bash[17774]: cluster 2026-03-09T18:33:28.717433+0000 mgr.x (mgr.24751) 193 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:33:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:32 vm00 bash[17468]: cluster 2026-03-09T18:33:30.717743+0000 mgr.x (mgr.24751) 194 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s 2026-03-09T18:33:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:32 vm00 bash[17468]: audit 2026-03-09T18:33:31.328881+0000 mgr.x (mgr.24751) 195 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:32 vm00 bash[22468]: cluster 2026-03-09T18:33:30.717743+0000 mgr.x (mgr.24751) 194 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s 2026-03-09T18:33:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:32 vm00 bash[22468]: audit 2026-03-09T18:33:31.328881+0000 mgr.x (mgr.24751) 195 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:32 vm08 bash[17774]: cluster 2026-03-09T18:33:30.717743+0000 mgr.x (mgr.24751) 194 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s 2026-03-09T18:33:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:32 vm08 bash[17774]: audit 2026-03-09T18:33:31.328881+0000 mgr.x (mgr.24751) 195 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T18:33:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:33:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:33:33] "GET /metrics HTTP/1.1" 200 37538 "" "Prometheus/2.51.0" 2026-03-09T18:33:34.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:34 vm08 bash[38540]: ts=2026-03-09T18:33:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:34.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:34 vm00 bash[22468]: cluster 2026-03-09T18:33:32.718105+0000 mgr.x (mgr.24751) 196 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s 2026-03-09T18:33:34.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:34 vm00 bash[17468]: cluster 2026-03-09T18:33:32.718105+0000 mgr.x (mgr.24751) 196 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s 2026-03-09T18:33:34.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:34 vm08 bash[17774]: cluster 2026-03-09T18:33:32.718105+0000 mgr.x (mgr.24751) 196 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB 
avail; 27 KiB/s rd, 0 B/s wr, 44 op/s 2026-03-09T18:33:35.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:35 vm00 bash[22468]: cluster 2026-03-09T18:33:34.718571+0000 mgr.x (mgr.24751) 197 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s 2026-03-09T18:33:35.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:35 vm00 bash[17468]: cluster 2026-03-09T18:33:34.718571+0000 mgr.x (mgr.24751) 197 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s 2026-03-09T18:33:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:35 vm08 bash[17774]: cluster 2026-03-09T18:33:34.718571+0000 mgr.x (mgr.24751) 197 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s 2026-03-09T18:33:37.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:36 vm08 bash[38540]: ts=2026-03-09T18:33:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", 
machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:37.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:37 vm00 bash[22468]: cluster 2026-03-09T18:33:36.718886+0000 mgr.x (mgr.24751) 198 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:37.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:37 vm00 bash[17468]: cluster 2026-03-09T18:33:36.718886+0000 mgr.x (mgr.24751) 198 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:37 vm08 bash[17774]: cluster 2026-03-09T18:33:36.718886+0000 mgr.x (mgr.24751) 198 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:40.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:39 vm00 bash[22468]: cluster 2026-03-09T18:33:38.719375+0000 mgr.x (mgr.24751) 199 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:39 vm00 bash[17468]: cluster 2026-03-09T18:33:38.719375+0000 mgr.x (mgr.24751) 199 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:39 vm08 bash[17774]: cluster 
2026-03-09T18:33:38.719375+0000 mgr.x (mgr.24751) 199 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:42.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:41 vm00 bash[22468]: cluster 2026-03-09T18:33:40.719700+0000 mgr.x (mgr.24751) 200 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:42.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:41 vm00 bash[22468]: audit 2026-03-09T18:33:41.150367+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:41 vm00 bash[22468]: audit 2026-03-09T18:33:41.337562+0000 mgr.x (mgr.24751) 201 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:41 vm00 bash[22468]: audit 2026-03-09T18:33:41.444737+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:41 vm00 bash[22468]: audit 2026-03-09T18:33:41.445476+0000 mon.b (mon.2) 118 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:41 vm00 bash[22468]: audit 2026-03-09T18:33:41.462003+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:41 vm00 bash[17468]: cluster 2026-03-09T18:33:40.719700+0000 mgr.x (mgr.24751) 200 : cluster 
[DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:41 vm00 bash[17468]: audit 2026-03-09T18:33:41.150367+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:41 vm00 bash[17468]: audit 2026-03-09T18:33:41.337562+0000 mgr.x (mgr.24751) 201 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:41 vm00 bash[17468]: audit 2026-03-09T18:33:41.444737+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:41 vm00 bash[17468]: audit 2026-03-09T18:33:41.445476+0000 mon.b (mon.2) 118 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:33:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:41 vm00 bash[17468]: audit 2026-03-09T18:33:41.462003+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:33:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:41 vm08 bash[17774]: cluster 2026-03-09T18:33:40.719700+0000 mgr.x (mgr.24751) 200 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:41 vm08 bash[17774]: audit 2026-03-09T18:33:41.150367+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:33:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:41 vm08 bash[17774]: audit 2026-03-09T18:33:41.337562+0000 mgr.x (mgr.24751) 201 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:41 vm08 bash[17774]: audit 2026-03-09T18:33:41.444737+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:33:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:41 vm08 bash[17774]: audit 2026-03-09T18:33:41.445476+0000 mon.b (mon.2) 118 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:33:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:41 vm08 bash[17774]: audit 2026-03-09T18:33:41.462003+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24751 ' entity='mgr.x' 2026-03-09T18:33:43.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:42 vm00 bash[22468]: audit 2026-03-09T18:33:42.103767+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:43.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:42 vm00 bash[17468]: audit 2026-03-09T18:33:42.103767+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:42 vm08 bash[17774]: audit 2026-03-09T18:33:42.103767+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-09T18:33:43.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:33:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:33:43] "GET /metrics HTTP/1.1" 200 37537 "" "Prometheus/2.51.0" 2026-03-09T18:33:44.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:43 vm00 bash[22468]: cluster 2026-03-09T18:33:42.720098+0000 mgr.x (mgr.24751) 202 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:44.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:43 vm00 bash[17468]: cluster 2026-03-09T18:33:42.720098+0000 mgr.x (mgr.24751) 202 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:44.147 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:43 vm08 bash[17774]: cluster 2026-03-09T18:33:42.720098+0000 mgr.x (mgr.24751) 202 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:44.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:44 vm08 bash[38540]: ts=2026-03-09T18:33:44.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:46.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:45 vm00 bash[22468]: cluster 2026-03-09T18:33:44.720436+0000 mgr.x (mgr.24751) 203 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:46.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:45 vm00 bash[17468]: cluster 2026-03-09T18:33:44.720436+0000 mgr.x (mgr.24751) 203 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:45 vm08 bash[17774]: cluster 2026-03-09T18:33:44.720436+0000 mgr.x (mgr.24751) 203 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:33:47.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:46 vm08 bash[38540]: ts=2026-03-09T18:33:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:48 vm08 bash[17774]: cluster 2026-03-09T18:33:46.720739+0000 mgr.x (mgr.24751) 204 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:48.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:48 vm00 bash[22468]: cluster 2026-03-09T18:33:46.720739+0000 mgr.x (mgr.24751) 204 : cluster [DBG] pgmap v140: 161 pgs: 161 
active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:48.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:48 vm00 bash[17468]: cluster 2026-03-09T18:33:46.720739+0000 mgr.x (mgr.24751) 204 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:50.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:50 vm00 bash[22468]: cluster 2026-03-09T18:33:48.721246+0000 mgr.x (mgr.24751) 205 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:50.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:50 vm00 bash[17468]: cluster 2026-03-09T18:33:48.721246+0000 mgr.x (mgr.24751) 205 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:50.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:50 vm08 bash[17774]: cluster 2026-03-09T18:33:48.721246+0000 mgr.x (mgr.24751) 205 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:52.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:52 vm00 bash[17468]: cluster 2026-03-09T18:33:50.721541+0000 mgr.x (mgr.24751) 206 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:52.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:52 vm00 bash[17468]: audit 2026-03-09T18:33:51.346332+0000 mgr.x (mgr.24751) 207 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:52.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:52 vm00 bash[22468]: cluster 2026-03-09T18:33:50.721541+0000 mgr.x (mgr.24751) 206 : cluster [DBG] pgmap v142: 
161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:52.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:52 vm00 bash[22468]: audit 2026-03-09T18:33:51.346332+0000 mgr.x (mgr.24751) 207 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:52.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:52 vm08 bash[17774]: cluster 2026-03-09T18:33:50.721541+0000 mgr.x (mgr.24751) 206 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:52.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:52 vm08 bash[17774]: audit 2026-03-09T18:33:51.346332+0000 mgr.x (mgr.24751) 207 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:33:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:33:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:33:53] "GET /metrics HTTP/1.1" 200 37537 "" "Prometheus/2.51.0" 2026-03-09T18:33:54.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:54 vm08 bash[38540]: ts=2026-03-09T18:33:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:54.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:54 vm00 bash[17468]: cluster 2026-03-09T18:33:52.721905+0000 mgr.x (mgr.24751) 208 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:54.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:54 vm00 bash[22468]: cluster 2026-03-09T18:33:52.721905+0000 mgr.x (mgr.24751) 208 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:54.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:54 vm08 bash[17774]: cluster 
2026-03-09T18:33:52.721905+0000 mgr.x (mgr.24751) 208 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:33:56.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:56 vm00 bash[17468]: cluster 2026-03-09T18:33:54.722290+0000 mgr.x (mgr.24751) 209 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:56.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:56 vm00 bash[22468]: cluster 2026-03-09T18:33:54.722290+0000 mgr.x (mgr.24751) 209 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:56.947 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:56 vm08 bash[17774]: cluster 2026-03-09T18:33:54.722290+0000 mgr.x (mgr.24751) 209 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:57.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:33:56 vm08 bash[38540]: ts=2026-03-09T18:33:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", 
cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:33:57.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:57 vm00 bash[17468]: cluster 2026-03-09T18:33:56.722592+0000 mgr.x (mgr.24751) 210 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:57.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:57 vm00 bash[17468]: audit 2026-03-09T18:33:57.104019+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:57.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:57 vm00 bash[22468]: cluster 2026-03-09T18:33:56.722592+0000 mgr.x (mgr.24751) 210 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:33:57.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:57 vm00 bash[22468]: audit 2026-03-09T18:33:57.104019+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:33:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:57 vm08 bash[17774]: cluster 2026-03-09T18:33:56.722592+0000 mgr.x (mgr.24751) 210 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T18:33:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:57 vm08 bash[17774]: audit 2026-03-09T18:33:57.104019+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:00.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:33:59 vm00 bash[17468]: cluster 2026-03-09T18:33:58.723102+0000 mgr.x (mgr.24751) 211 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:00.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:33:59 vm00 bash[22468]: cluster 2026-03-09T18:33:58.723102+0000 mgr.x (mgr.24751) 211 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:33:59 vm08 bash[17774]: cluster 2026-03-09T18:33:58.723102+0000 mgr.x (mgr.24751) 211 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:02 vm08 bash[17774]: cluster 2026-03-09T18:34:00.723409+0000 mgr.x (mgr.24751) 212 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:02 vm08 bash[17774]: audit 2026-03-09T18:34:01.349485+0000 mgr.x (mgr.24751) 213 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:02 vm00 bash[22468]: cluster 2026-03-09T18:34:00.723409+0000 mgr.x (mgr.24751) 212 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:02 vm00 bash[22468]: audit 2026-03-09T18:34:01.349485+0000 mgr.x (mgr.24751) 213 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:02 vm00 bash[17468]: cluster 2026-03-09T18:34:00.723409+0000 mgr.x (mgr.24751) 212 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:02 vm00 bash[17468]: audit 2026-03-09T18:34:01.349485+0000 mgr.x (mgr.24751) 213 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:34:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:34:03] "GET /metrics HTTP/1.1" 200 37534 "" "Prometheus/2.51.0" 2026-03-09T18:34:04.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:04 vm08 bash[38540]: ts=2026-03-09T18:34:04.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:34:04.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:04 vm00 bash[17468]: cluster 2026-03-09T18:34:02.723830+0000 mgr.x (mgr.24751) 214 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:04.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:04 vm00 bash[22468]: cluster 2026-03-09T18:34:02.723830+0000 mgr.x (mgr.24751) 214 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:04 vm08 bash[17774]: cluster 2026-03-09T18:34:02.723830+0000 mgr.x (mgr.24751) 214 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T18:34:05.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:05 vm00 bash[17468]: cluster 2026-03-09T18:34:04.724253+0000 mgr.x (mgr.24751) 215 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:05.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:05 vm00 bash[22468]: cluster 2026-03-09T18:34:04.724253+0000 mgr.x (mgr.24751) 215 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:05 vm08 bash[17774]: cluster 2026-03-09T18:34:04.724253+0000 mgr.x (mgr.24751) 215 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:07.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:06 vm08 bash[38540]: ts=2026-03-09T18:34:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:34:08.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:08 vm00 bash[22468]: cluster 2026-03-09T18:34:06.724524+0000 mgr.x (mgr.24751) 216 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:08.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:08 vm00 bash[17468]: cluster 2026-03-09T18:34:06.724524+0000 mgr.x (mgr.24751) 216 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:08.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:08 vm08 bash[17774]: cluster 2026-03-09T18:34:06.724524+0000 mgr.x (mgr.24751) 216 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:10 vm00 bash[22468]: cluster 2026-03-09T18:34:08.724976+0000 mgr.x (mgr.24751) 217 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:10.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:10 vm00 bash[17468]: cluster 2026-03-09T18:34:08.724976+0000 mgr.x (mgr.24751) 217 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:10.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:10 vm08 bash[17774]: cluster 2026-03-09T18:34:08.724976+0000 mgr.x (mgr.24751) 217 : cluster [DBG] pgmap v151: 161 pgs: 161 
active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:12 vm00 bash[22468]: cluster 2026-03-09T18:34:10.725206+0000 mgr.x (mgr.24751) 218 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:12 vm00 bash[22468]: audit 2026-03-09T18:34:11.353181+0000 mgr.x (mgr.24751) 219 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:12 vm00 bash[22468]: audit 2026-03-09T18:34:12.104105+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:12 vm00 bash[17468]: cluster 2026-03-09T18:34:10.725206+0000 mgr.x (mgr.24751) 218 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:12 vm00 bash[17468]: audit 2026-03-09T18:34:11.353181+0000 mgr.x (mgr.24751) 219 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:12 vm00 bash[17468]: audit 2026-03-09T18:34:12.104105+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:12 vm08 bash[17774]: cluster 2026-03-09T18:34:10.725206+0000 mgr.x (mgr.24751) 218 : cluster 
[DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:12 vm08 bash[17774]: audit 2026-03-09T18:34:11.353181+0000 mgr.x (mgr.24751) 219 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:12 vm08 bash[17774]: audit 2026-03-09T18:34:12.104105+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:34:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:34:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:34:13] "GET /metrics HTTP/1.1" 200 37535 "" "Prometheus/2.51.0" 2026-03-09T18:34:14.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:14 vm08 bash[38540]: ts=2026-03-09T18:34:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:34:14.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:14 vm00 bash[22468]: cluster 2026-03-09T18:34:12.725586+0000 mgr.x (mgr.24751) 220 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:14.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:14 vm00 bash[17468]: cluster 2026-03-09T18:34:12.725586+0000 mgr.x (mgr.24751) 220 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:14.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:14 vm08 bash[17774]: cluster 2026-03-09T18:34:12.725586+0000 mgr.x (mgr.24751) 220 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-09T18:34:15.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:15 vm00 bash[22468]: cluster 2026-03-09T18:34:14.725971+0000 mgr.x (mgr.24751) 221 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:15.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:15 vm00 bash[17468]: cluster 2026-03-09T18:34:14.725971+0000 mgr.x (mgr.24751) 221 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:15 vm08 bash[17774]: cluster 2026-03-09T18:34:14.725971+0000 mgr.x (mgr.24751) 221 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:17.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:16 vm08 bash[38540]: ts=2026-03-09T18:34:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", 
version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T18:34:18.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:18 vm00 bash[22468]: cluster 2026-03-09T18:34:16.726291+0000 mgr.x (mgr.24751) 222 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:18.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:18 vm00 bash[17468]: cluster 2026-03-09T18:34:16.726291+0000 mgr.x (mgr.24751) 222 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:18.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:18 vm08 bash[17774]: cluster 2026-03-09T18:34:16.726291+0000 mgr.x (mgr.24751) 222 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:20.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:20 vm00 bash[22468]: cluster 2026-03-09T18:34:18.726828+0000 mgr.x (mgr.24751) 223 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:20.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:20 vm00 bash[17468]: cluster 2026-03-09T18:34:18.726828+0000 mgr.x (mgr.24751) 223 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:20 vm08 bash[17774]: cluster 2026-03-09T18:34:18.726828+0000 mgr.x (mgr.24751) 223 : cluster [DBG] pgmap v156: 161 pgs: 161 
active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:34:22.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:22 vm00 bash[22468]: cluster 2026-03-09T18:34:20.727127+0000 mgr.x (mgr.24751) 224 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:22.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:22 vm00 bash[22468]: audit 2026-03-09T18:34:21.363712+0000 mgr.x (mgr.24751) 225 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:22 vm00 bash[17468]: cluster 2026-03-09T18:34:20.727127+0000 mgr.x (mgr.24751) 224 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:22.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:22 vm00 bash[17468]: audit 2026-03-09T18:34:21.363712+0000 mgr.x (mgr.24751) 225 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:22 vm08 bash[17774]: cluster 2026-03-09T18:34:20.727127+0000 mgr.x (mgr.24751) 224 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:34:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:22 vm08 bash[17774]: audit 2026-03-09T18:34:21.363712+0000 mgr.x (mgr.24751) 225 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:34:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:34:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:34:23] "GET /metrics HTTP/1.1" 200 
37535 "" "Prometheus/2.51.0" 2026-03-09T18:34:24.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:24 vm08 bash[38540]: ts=2026-03-09T18:34:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be 
unique on one side"
2026-03-09T18:34:24.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:24 vm00 bash[22468]: cluster 2026-03-09T18:34:22.727531+0000 mgr.x (mgr.24751) 226 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:24.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:24 vm00 bash[17468]: cluster 2026-03-09T18:34:22.727531+0000 mgr.x (mgr.24751) 226 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:24.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:24 vm08 bash[17774]: cluster 2026-03-09T18:34:22.727531+0000 mgr.x (mgr.24751) 226 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:25.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:25 vm00 bash[22468]: cluster 2026-03-09T18:34:24.727928+0000 mgr.x (mgr.24751) 227 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:25.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:25 vm00 bash[17468]: cluster 2026-03-09T18:34:24.727928+0000 mgr.x (mgr.24751) 227 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:25 vm08 bash[17774]: cluster 2026-03-09T18:34:24.727928+0000 mgr.x (mgr.24751) 227 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:27.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:26 vm08 bash[38540]: ts=2026-03-09T18:34:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-09T18:34:27.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:27 vm00 bash[22468]: audit 2026-03-09T18:34:27.104206+0000 mon.b (mon.2) 122 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:27.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:27 vm00 bash[17468]: audit 2026-03-09T18:34:27.104206+0000 mon.b (mon.2) 122 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:27 vm08 bash[17774]: audit 2026-03-09T18:34:27.104206+0000 mon.b (mon.2) 122 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:28.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:28 vm00 bash[22468]: cluster 2026-03-09T18:34:26.728216+0000 mgr.x (mgr.24751) 228 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:28.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:28 vm00 bash[17468]: cluster 2026-03-09T18:34:26.728216+0000 mgr.x (mgr.24751) 228 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:28 vm08 bash[17774]: cluster 2026-03-09T18:34:26.728216+0000 mgr.x (mgr.24751) 228 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:29.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:29 vm00 bash[22468]: cluster 2026-03-09T18:34:28.728810+0000 mgr.x (mgr.24751) 229 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:29.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:29 vm00 bash[17468]: cluster 2026-03-09T18:34:28.728810+0000 mgr.x (mgr.24751) 229 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:29 vm08 bash[17774]: cluster 2026-03-09T18:34:28.728810+0000 mgr.x (mgr.24751) 229 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:32.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:32 vm00 bash[22468]: cluster 2026-03-09T18:34:30.729067+0000 mgr.x (mgr.24751) 230 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:32 vm00 bash[22468]: audit 2026-03-09T18:34:31.365405+0000 mgr.x (mgr.24751) 231 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:32 vm00 bash[17468]: cluster 2026-03-09T18:34:30.729067+0000 mgr.x (mgr.24751) 230 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:32 vm00 bash[17468]: audit 2026-03-09T18:34:31.365405+0000 mgr.x (mgr.24751) 231 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:32 vm08 bash[17774]: cluster 2026-03-09T18:34:30.729067+0000 mgr.x (mgr.24751) 230 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:32 vm08 bash[17774]: audit 2026-03-09T18:34:31.365405+0000 mgr.x (mgr.24751) 231 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:34:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:34:33] "GET /metrics HTTP/1.1" 200 37538 "" "Prometheus/2.51.0"
2026-03-09T18:34:34.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:34 vm08 bash[38540]: ts=2026-03-09T18:34:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-09T18:34:34.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:34 vm00 bash[22468]: cluster 2026-03-09T18:34:32.729461+0000 mgr.x (mgr.24751) 232 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:34.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:34 vm00 bash[17468]: cluster 2026-03-09T18:34:32.729461+0000 mgr.x (mgr.24751) 232 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:34.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:34 vm08 bash[17774]: cluster 2026-03-09T18:34:32.729461+0000 mgr.x (mgr.24751) 232 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:36.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:36 vm00 bash[22468]: cluster 2026-03-09T18:34:34.729709+0000 mgr.x (mgr.24751) 233 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:36.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:36 vm00 bash[17468]: cluster 2026-03-09T18:34:34.729709+0000 mgr.x (mgr.24751) 233 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:36.948 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:36 vm08 bash[17774]: cluster 2026-03-09T18:34:34.729709+0000 mgr.x (mgr.24751) 233 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:37.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:36 vm08 bash[38540]: ts=2026-03-09T18:34:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm00\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm00\", job=\"node\", machine=\"x86_64\", nodename=\"vm00\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-09T18:34:38.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:38 vm00 bash[22468]: cluster 2026-03-09T18:34:36.729946+0000 mgr.x (mgr.24751) 234 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:38.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:38 vm00 bash[17468]: cluster 2026-03-09T18:34:36.729946+0000 mgr.x (mgr.24751) 234 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:38 vm08 bash[17774]: cluster 2026-03-09T18:34:36.729946+0000 mgr.x (mgr.24751) 234 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:39.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:39 vm00 bash[22468]: cluster 2026-03-09T18:34:38.730427+0000 mgr.x (mgr.24751) 235 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:39.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:39 vm00 bash[17468]: cluster 2026-03-09T18:34:38.730427+0000 mgr.x (mgr.24751) 235 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:39 vm08 bash[17774]: cluster 2026-03-09T18:34:38.730427+0000 mgr.x (mgr.24751) 235 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:42.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: cluster 2026-03-09T18:34:40.730760+0000 mgr.x (mgr.24751) 236 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:42.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: audit 2026-03-09T18:34:41.367585+0000 mgr.x (mgr.24751) 237 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:42.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: audit 2026-03-09T18:34:41.500599+0000 mon.b (mon.2) 123 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:34:42.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: audit 2026-03-09T18:34:41.804395+0000 mon.b (mon.2) 124 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:34:42.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: audit 2026-03-09T18:34:41.804918+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:34:42.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: audit 2026-03-09T18:34:41.819781+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:42 vm00 bash[22468]: audit 2026-03-09T18:34:42.104510+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: cluster 2026-03-09T18:34:40.730760+0000 mgr.x (mgr.24751) 236 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: audit 2026-03-09T18:34:41.367585+0000 mgr.x (mgr.24751) 237 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: audit 2026-03-09T18:34:41.500599+0000 mon.b (mon.2) 123 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: audit 2026-03-09T18:34:41.804395+0000 mon.b (mon.2) 124 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: audit 2026-03-09T18:34:41.804918+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: audit 2026-03-09T18:34:41.819781+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:34:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:42 vm00 bash[17468]: audit 2026-03-09T18:34:42.104510+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: cluster 2026-03-09T18:34:40.730760+0000 mgr.x (mgr.24751) 236 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: audit 2026-03-09T18:34:41.367585+0000 mgr.x (mgr.24751) 237 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: audit 2026-03-09T18:34:41.500599+0000 mon.b (mon.2) 123 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: audit 2026-03-09T18:34:41.804395+0000 mon.b (mon.2) 124 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: audit 2026-03-09T18:34:41.804918+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: audit 2026-03-09T18:34:41.819781+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24751 ' entity='mgr.x'
2026-03-09T18:34:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:42 vm08 bash[17774]: audit 2026-03-09T18:34:42.104510+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:43.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:34:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:34:43] "GET /metrics HTTP/1.1" 200 37536 "" "Prometheus/2.51.0"
2026-03-09T18:34:44.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:34:44 vm08 bash[38540]: ts=2026-03-09T18:34:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"614f4990-1be4-11f1-8b84-dfd1edd9d965\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.100\", device_class=\"hdd\", hostname=\"vm00\", instance=\"192.168.123.108:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.100\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-09T18:34:44.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:44 vm00 bash[22468]: cluster 2026-03-09T18:34:42.731468+0000 mgr.x (mgr.24751) 238 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:44.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:44 vm00 bash[17468]: cluster 2026-03-09T18:34:42.731468+0000 mgr.x (mgr.24751) 238 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:44.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:44 vm08 bash[17774]: cluster 2026-03-09T18:34:42.731468+0000 mgr.x (mgr.24751) 238 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:46.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:46 vm00 bash[22468]: cluster 2026-03-09T18:34:44.731768+0000 mgr.x (mgr.24751) 239 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:46.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:46 vm00 bash[17468]: cluster 2026-03-09T18:34:44.731768+0000 mgr.x (mgr.24751) 239 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:46.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:46 vm08 bash[17774]: cluster 2026-03-09T18:34:44.731768+0000 mgr.x (mgr.24751) 239 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:47.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:47 vm00 bash[22468]: cluster 2026-03-09T18:34:46.732054+0000 mgr.x (mgr.24751) 240 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:47.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:47 vm00 bash[17468]: cluster 2026-03-09T18:34:46.732054+0000 mgr.x (mgr.24751) 240 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:47.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:47 vm08 bash[17774]: cluster 2026-03-09T18:34:46.732054+0000 mgr.x (mgr.24751) 240 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:50.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:49 vm00 bash[22468]: cluster 2026-03-09T18:34:48.732511+0000 mgr.x (mgr.24751) 241 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:50.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:49 vm00 bash[17468]: cluster 2026-03-09T18:34:48.732511+0000 mgr.x (mgr.24751) 241 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:49 vm08 bash[17774]: cluster 2026-03-09T18:34:48.732511+0000 mgr.x (mgr.24751) 241 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:52.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:52 vm00 bash[22468]: cluster 2026-03-09T18:34:50.732861+0000 mgr.x (mgr.24751) 242 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:52.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:52 vm00 bash[22468]: audit 2026-03-09T18:34:51.371277+0000 mgr.x (mgr.24751) 243 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:52.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:52 vm00 bash[17468]: cluster 2026-03-09T18:34:50.732861+0000 mgr.x (mgr.24751) 242 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:52.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:52 vm00 bash[17468]: audit 2026-03-09T18:34:51.371277+0000 mgr.x (mgr.24751) 243 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:52.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:52 vm08 bash[17774]: cluster 2026-03-09T18:34:50.732861+0000 mgr.x (mgr.24751) 242 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:52.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:52 vm08 bash[17774]: audit 2026-03-09T18:34:51.371277+0000 mgr.x (mgr.24751) 243 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:34:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:34:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:34:53] "GET /metrics HTTP/1.1" 200 37536 "" "Prometheus/2.51.0"
2026-03-09T18:34:54.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:54 vm00 bash[22468]: cluster 2026-03-09T18:34:52.733265+0000 mgr.x (mgr.24751) 244 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:54.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:54 vm00 bash[17468]: cluster 2026-03-09T18:34:52.733265+0000 mgr.x (mgr.24751) 244 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:54.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:54 vm08 bash[17774]: cluster 2026-03-09T18:34:52.733265+0000 mgr.x (mgr.24751) 244 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:56.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:56 vm00 bash[22468]: cluster 2026-03-09T18:34:54.733523+0000 mgr.x (mgr.24751) 245 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:56.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:56 vm00 bash[17468]: cluster 2026-03-09T18:34:54.733523+0000 mgr.x (mgr.24751) 245 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:56.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:56 vm08 bash[17774]: cluster 2026-03-09T18:34:54.733523+0000 mgr.x (mgr.24751) 245 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:57.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:57 vm00 bash[22468]: audit 2026-03-09T18:34:57.104633+0000 mon.b (mon.2) 127 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:57.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:57 vm00 bash[17468]: audit 2026-03-09T18:34:57.104633+0000 mon.b (mon.2) 127 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:57 vm08 bash[17774]: audit 2026-03-09T18:34:57.104633+0000 mon.b (mon.2) 127 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:34:58.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:58 vm00 bash[22468]: cluster 2026-03-09T18:34:56.733768+0000 mgr.x (mgr.24751) 246 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:58.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:58 vm00 bash[17468]: cluster 2026-03-09T18:34:56.733768+0000 mgr.x (mgr.24751) 246 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:58.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:58 vm08 bash[17774]: cluster 2026-03-09T18:34:56.733768+0000 mgr.x (mgr.24751) 246 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:34:59.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:34:59 vm00 bash[22468]: cluster 2026-03-09T18:34:58.734250+0000 mgr.x (mgr.24751) 247 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:59.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:34:59 vm00 bash[17468]: cluster 2026-03-09T18:34:58.734250+0000 mgr.x (mgr.24751) 247 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:34:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:34:59 vm08 bash[17774]: cluster 2026-03-09T18:34:58.734250+0000 mgr.x (mgr.24751) 247 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:02.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:02 vm00 bash[22468]: cluster 2026-03-09T18:35:00.734575+0000 mgr.x (mgr.24751) 248 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:02.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:02 vm00 bash[22468]: audit 2026-03-09T18:35:01.376297+0000 mgr.x (mgr.24751) 249 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:02.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:02 vm00 bash[17468]: cluster 2026-03-09T18:35:00.734575+0000 mgr.x (mgr.24751) 248 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:02 vm00 bash[17468]: audit 2026-03-09T18:35:01.376297+0000 mgr.x (mgr.24751) 249 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:02 vm08 bash[17774]: cluster 2026-03-09T18:35:00.734575+0000 mgr.x (mgr.24751) 248 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:02 vm08 bash[17774]: audit 2026-03-09T18:35:01.376297+0000 mgr.x (mgr.24751) 249 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:35:03] "GET /metrics HTTP/1.1" 200 37533 "" "Prometheus/2.51.0"
2026-03-09T18:35:04.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:04 vm00 bash[22468]: cluster 2026-03-09T18:35:02.735060+0000 mgr.x (mgr.24751) 250 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:04.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:04 vm00 bash[17468]: cluster 2026-03-09T18:35:02.735060+0000 mgr.x (mgr.24751) 250 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:04 vm08 bash[17774]: cluster 2026-03-09T18:35:02.735060+0000 mgr.x (mgr.24751) 250 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:06.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:06 vm00 bash[22468]: cluster 2026-03-09T18:35:04.735396+0000 mgr.x (mgr.24751) 251 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:06.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:06 vm00 bash[17468]: cluster 2026-03-09T18:35:04.735396+0000 mgr.x (mgr.24751) 251 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:06.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:06 vm08 bash[17774]: cluster 2026-03-09T18:35:04.735396+0000 mgr.x (mgr.24751) 251 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:07.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:07 vm00 bash[22468]: cluster 2026-03-09T18:35:06.735703+0000 mgr.x (mgr.24751) 252 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:07.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:07 vm00 bash[17468]: cluster 2026-03-09T18:35:06.735703+0000 mgr.x (mgr.24751) 252 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:07 vm08 bash[17774]: cluster 2026-03-09T18:35:06.735703+0000 mgr.x (mgr.24751) 252 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:10.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:09 vm00 bash[22468]: cluster 2026-03-09T18:35:08.736387+0000 mgr.x (mgr.24751) 253 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:10.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:09 vm00 bash[17468]: cluster 2026-03-09T18:35:08.736387+0000 mgr.x (mgr.24751) 253 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:09 vm08 bash[17774]: cluster 2026-03-09T18:35:08.736387+0000 mgr.x (mgr.24751) 253 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:12 vm00 bash[22468]: cluster 2026-03-09T18:35:10.736657+0000 mgr.x (mgr.24751) 254 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:12 vm00 bash[22468]: audit 2026-03-09T18:35:11.386999+0000 mgr.x (mgr.24751) 255 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:12.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:12 vm00 bash[22468]: audit 2026-03-09T18:35:12.104828+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:12 vm00 bash[17468]: cluster 2026-03-09T18:35:10.736657+0000 mgr.x (mgr.24751) 254 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:12 vm00 bash[17468]: audit 2026-03-09T18:35:11.386999+0000 mgr.x (mgr.24751) 255 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:12 vm00 bash[17468]: audit 2026-03-09T18:35:12.104828+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:12 vm08 bash[17774]: cluster 2026-03-09T18:35:10.736657+0000 mgr.x (mgr.24751) 254 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:12 vm08 bash[17774]: audit 2026-03-09T18:35:11.386999+0000 mgr.x (mgr.24751) 255 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:12 vm08 bash[17774]: audit 2026-03-09T18:35:12.104828+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:13 vm08 bash[17774]: cluster 2026-03-09T18:35:12.737033+0000 mgr.x (mgr.24751) 256 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:35:13] "GET /metrics HTTP/1.1" 200 37537 "" "Prometheus/2.51.0"
2026-03-09T18:35:13.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:13 vm00 bash[22468]: cluster 2026-03-09T18:35:12.737033+0000 mgr.x (mgr.24751) 256 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:13.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:13 vm00 bash[17468]: cluster 2026-03-09T18:35:12.737033+0000 mgr.x (mgr.24751) 256 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:16.127
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:15 vm00 bash[22468]: cluster 2026-03-09T18:35:14.737365+0000 mgr.x (mgr.24751) 257 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:16.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:15 vm00 bash[17468]: cluster 2026-03-09T18:35:14.737365+0000 mgr.x (mgr.24751) 257 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:16.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:15 vm08 bash[17774]: cluster 2026-03-09T18:35:14.737365+0000 mgr.x (mgr.24751) 257 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:18.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:18 vm00 bash[22468]: cluster 2026-03-09T18:35:16.737693+0000 mgr.x (mgr.24751) 258 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:18.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:18 vm00 bash[17468]: cluster 2026-03-09T18:35:16.737693+0000 mgr.x (mgr.24751) 258 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:18.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:18 vm08 bash[17774]: cluster 2026-03-09T18:35:16.737693+0000 mgr.x (mgr.24751) 258 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:19.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:19 vm00 bash[22468]: cluster 2026-03-09T18:35:18.738185+0000 mgr.x (mgr.24751) 259 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:19.877 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:19 vm00 bash[17468]: cluster 2026-03-09T18:35:18.738185+0000 mgr.x (mgr.24751) 259 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:19 vm08 bash[17774]: cluster 2026-03-09T18:35:18.738185+0000 mgr.x (mgr.24751) 259 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:35:22.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:22 vm00 bash[22468]: cluster 2026-03-09T18:35:20.738480+0000 mgr.x (mgr.24751) 260 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:22.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:22 vm00 bash[22468]: audit 2026-03-09T18:35:21.389546+0000 mgr.x (mgr.24751) 261 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:22.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:22 vm00 bash[17468]: cluster 2026-03-09T18:35:20.738480+0000 mgr.x (mgr.24751) 260 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:35:22.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:22 vm00 bash[17468]: audit 2026-03-09T18:35:21.389546+0000 mgr.x (mgr.24751) 261 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:35:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:22 vm08 bash[17774]: cluster 2026-03-09T18:35:20.738480+0000 mgr.x (mgr.24751) 260 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:35:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:22 vm08 bash[17774]: audit 2026-03-09T18:35:21.389546+0000 mgr.x (mgr.24751) 261 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:35:23] "GET /metrics HTTP/1.1" 200 37537 "" "Prometheus/2.51.0"
2026-03-09T18:35:24.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:24 vm00 bash[22468]: cluster 2026-03-09T18:35:22.738972+0000 mgr.x (mgr.24751) 262 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:24.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:24 vm00 bash[17468]: cluster 2026-03-09T18:35:22.738972+0000 mgr.x (mgr.24751) 262 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:24.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:24 vm08 bash[17774]: cluster 2026-03-09T18:35:22.738972+0000 mgr.x (mgr.24751) 262 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:26.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:26 vm00 bash[22468]: cluster 2026-03-09T18:35:24.739274+0000 mgr.x (mgr.24751) 263 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:26.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:26 vm00 bash[17468]: cluster 2026-03-09T18:35:24.739274+0000 mgr.x (mgr.24751) 263 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:26 vm08 bash[17774]: cluster 2026-03-09T18:35:24.739274+0000 mgr.x (mgr.24751) 263 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:27.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:27 vm00 bash[22468]: cluster 2026-03-09T18:35:26.739567+0000 mgr.x (mgr.24751) 264 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:27.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:27 vm00 bash[22468]: audit 2026-03-09T18:35:27.104922+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:27.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:27 vm00 bash[17468]: cluster 2026-03-09T18:35:26.739567+0000 mgr.x (mgr.24751) 264 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:27.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:27 vm00 bash[17468]: audit 2026-03-09T18:35:27.104922+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:27 vm08 bash[17774]: cluster 2026-03-09T18:35:26.739567+0000 mgr.x (mgr.24751) 264 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:27 vm08 bash[17774]: audit 2026-03-09T18:35:27.104922+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24751 192.168.123.108:0/588353564' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:28.776 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (5m) 2m ago 12m 15.9M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (5m) 2m ago 12m 39.5M - dad864ee21e9 b6a0baf6efb9
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (5m) 2m ago 12m 42.4M - 3.5 e1d6a67b021e 68f4fe5b96ee
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283 running (7m) 2m ago 15m 533M - 19.2.3-678-ge911bdeb 654f31e6858e c24396cb6839
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (2m) 2m ago 16m 336M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (16m) 2m ago 16m 55.8M 2048M 17.2.0 e1d6a67b021e 819e8890799a
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (15m) 2m ago 15m 42.7M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (15m) 2m ago 15m 42.5M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (5m) 2m ago 12m 7503k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (5m) 2m ago 12m 7491k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (15m) 2m ago 15m 49.6M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (14m) 2m ago 14m 52.0M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (14m) 2m ago 14m 46.8M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (14m) 2m ago 14m 51.2M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (14m) 2m ago 14m 49.7M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:35:29.193 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (13m) 2m ago 13m 49.8M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:35:29.194 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (13m) 2m ago 13m 48.3M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:35:29.194 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (13m) 2m ago 13m 48.7M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:35:29.194 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (5m) 2m ago 12m 40.0M - 2.51.0 1d3b7f56885b 64bf8fcd1d5c
2026-03-09T18:35:29.194 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (12m) 2m ago 12m 84.8M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:35:29.194 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (12m) 2m ago 12m 85.2M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:35:29.238 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {},
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:35:29.677 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:35:29.722 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-09T18:35:29.950 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:29 vm00 bash[22468]: cluster 2026-03-09T18:35:28.740183+0000 mgr.x (mgr.24751) 265 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:29.950 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:29 vm00 bash[22468]: audit 2026-03-09T18:35:29.190205+0000 mgr.x (mgr.24751) 266 : audit [DBG] from='client.24886 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:35:29.950 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:29 vm00 bash[22468]: audit 2026-03-09T18:35:29.681510+0000 mon.c (mon.1) 140 : audit [DBG] from='client.? 192.168.123.100:0/4245066244' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:35:29.950 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:29 vm00 bash[17468]: cluster 2026-03-09T18:35:28.740183+0000 mgr.x (mgr.24751) 265 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:29.950 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:29 vm00 bash[17468]: audit 2026-03-09T18:35:29.190205+0000 mgr.x (mgr.24751) 266 : audit [DBG] from='client.24886 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:35:29.950 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:29 vm00 bash[17468]: audit 2026-03-09T18:35:29.681510+0000 mon.c (mon.1) 140 : audit [DBG] from='client.? 192.168.123.100:0/4245066244' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:35:30.175 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK
2026-03-09T18:35:30.220 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-09T18:35:30.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:29 vm08 bash[17774]: cluster 2026-03-09T18:35:28.740183+0000 mgr.x (mgr.24751) 265 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:35:30.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:29 vm08 bash[17774]: audit 2026-03-09T18:35:29.190205+0000 mgr.x (mgr.24751) 266 : audit [DBG] from='client.24886 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:35:30.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:29 vm08 bash[17774]: audit 2026-03-09T18:35:29.681510+0000 mon.c (mon.1) 140 : audit [DBG] from='client.? 192.168.123.100:0/4245066244' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:35:30.653 INFO:teuthology.orchestra.run.vm00.stdout: cluster:
2026-03-09T18:35:30.653 INFO:teuthology.orchestra.run.vm00.stdout: id: 614f4990-1be4-11f1-8b84-dfd1edd9d965
2026-03-09T18:35:30.653 INFO:teuthology.orchestra.run.vm00.stdout: health: HEALTH_OK
2026-03-09T18:35:30.653 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: services:
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: mon: 3 daemons, quorum a,c,b (age 15m)
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: mgr: x(active, since 6m), standbys: y
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: osd: 8 osds: 8 up (since 13m), 8 in (since 13m)
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: data:
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: pools: 6 pools, 161 pgs
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: objects: 209 objects, 457 KiB
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: usage: 96 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: pgs: 161 active+clean
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: io:
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout: client: 1.2 KiB/s rd, 1 op/s rd, 0 op/s wr
2026-03-09T18:35:30.654 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-09T18:35:30.702 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph mgr fail'
2026-03-09T18:35:30.919 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:30 vm00 bash[22468]: audit 2026-03-09T18:35:30.180559+0000 mon.c (mon.1) 141 : audit [DBG] from='client.? 192.168.123.100:0/4253988029' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:35:30.919 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:30 vm00 bash[22468]: audit 2026-03-09T18:35:30.658534+0000 mon.a (mon.0) 877 : audit [DBG] from='client.? 192.168.123.100:0/2050965813' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:35:30.919 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:30 vm00 bash[17468]: audit 2026-03-09T18:35:30.180559+0000 mon.c (mon.1) 141 : audit [DBG] from='client.? 192.168.123.100:0/4253988029' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:35:30.919 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:30 vm00 bash[17468]: audit 2026-03-09T18:35:30.658534+0000 mon.a (mon.0) 877 : audit [DBG] from='client.? 192.168.123.100:0/2050965813' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:35:31.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:30 vm08 bash[17774]: audit 2026-03-09T18:35:30.180559+0000 mon.c (mon.1) 141 : audit [DBG] from='client.? 192.168.123.100:0/4253988029' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:35:31.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:30 vm08 bash[17774]: audit 2026-03-09T18:35:30.658534+0000 mon.a (mon.0) 877 : audit [DBG] from='client.? 192.168.123.100:0/2050965813' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T18:35:31.921 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:31 vm00 bash[22468]: cluster 2026-03-09T18:35:30.740531+0000 mgr.x (mgr.24751) 267 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:31 vm00 bash[22468]: audit 2026-03-09T18:35:31.153124+0000 mon.c (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/75271305' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:31 vm00 bash[22468]: audit 2026-03-09T18:35:31.153419+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:31 vm00 bash[22468]: cluster 2026-03-09T18:35:31.158911+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:31 vm00 bash[22468]: audit 2026-03-09T18:35:31.399468+0000 mgr.x (mgr.24751) 268 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:31 vm00 bash[17468]: cluster 2026-03-09T18:35:30.740531+0000 mgr.x (mgr.24751) 267 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:31 vm00 bash[17468]: audit 2026-03-09T18:35:31.153124+0000 mon.c (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/75271305' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:31 vm00 bash[17468]: audit 2026-03-09T18:35:31.153419+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:31 vm00 bash[17468]: cluster 2026-03-09T18:35:31.158911+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:31 vm00 bash[17468]: audit 2026-03-09T18:35:31.399468+0000 mgr.x (mgr.24751) 268 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:32.072 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:31 vm00 bash[53976]: [09/Mar/2026:18:35:31] ENGINE Bus STOPPING
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:31 vm08 bash[17774]: cluster 2026-03-09T18:35:30.740531+0000 mgr.x (mgr.24751) 267 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:31 vm08 bash[17774]: audit 2026-03-09T18:35:31.153124+0000 mon.c (mon.1) 142 : audit [INF] from='client.? 192.168.123.100:0/75271305' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:31 vm08 bash[17774]: audit 2026-03-09T18:35:31.153419+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:31 vm08 bash[17774]: cluster 2026-03-09T18:35:31.158911+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:31 vm08 bash[17774]: audit 2026-03-09T18:35:31.399468+0000 mgr.x (mgr.24751) 268 : audit [DBG] from='client.14940 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:31 vm08 bash[36576]: ignoring --setuser ceph since I am not root
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:31 vm08 bash[36576]: ignoring --setgroup ceph since I am not root
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:31 vm08 bash[36576]: debug 2026-03-09T18:35:31.938+0000 7ff59cd2a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:35:32.082 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:31 vm08 bash[36576]: debug 2026-03-09T18:35:31.970+0000 7ff59cd2a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:35:32.178 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:32 vm00 bash[53976]: [09/Mar/2026:18:35:32] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-09T18:35:32.178 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:32 vm00 bash[53976]: [09/Mar/2026:18:35:32] ENGINE Bus STOPPED
2026-03-09T18:35:32.342 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:32 vm08 bash[36576]: debug 2026-03-09T18:35:32.078+0000 7ff59cd2a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T18:35:32.377 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:32 vm00 bash[53976]: [09/Mar/2026:18:35:32] ENGINE Bus STARTING
2026-03-09T18:35:32.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:32 vm00 bash[53976]: [09/Mar/2026:18:35:32] ENGINE Serving on http://:::9283
2026-03-09T18:35:32.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:32 vm00 bash[53976]: [09/Mar/2026:18:35:32] ENGINE Bus STARTED
2026-03-09T18:35:32.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:32 vm08 bash[36576]: debug 2026-03-09T18:35:32.338+0000 7ff59cd2a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:35:33.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.839966+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
2026-03-09T18:35:33.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: cluster 2026-03-09T18:35:31.840140+0000 mon.a (mon.0) 881 : cluster [DBG] mgrmap e28: y(active, starting, since 0.685479s)
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.843020+0000 mon.a (mon.0) 882 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.843088+0000 mon.a (mon.0) 883 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.843140+0000 mon.a (mon.0) 884 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844227+0000 mon.a (mon.0) 885 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844449+0000 mon.a (mon.0) 886 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844552+0000 mon.a (mon.0) 887 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844622+0000 mon.a (mon.0) 888 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844716+0000 mon.a (mon.0) 889 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844781+0000 mon.a (mon.0) 890 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844857+0000 mon.a (mon.0) 891 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.844925+0000 mon.a (mon.0) 892 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.845064+0000 mon.a (mon.0) 893 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.845737+0000 mon.a (mon.0) 894 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.845810+0000 mon.a (mon.0) 895 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:31.846025+0000 mon.a (mon.0) 896 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: cluster 2026-03-09T18:35:32.154225+0000 mon.a (mon.0) 897 : cluster [INF] Manager daemon y is now available 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:32.184864+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:32.201637+0000 mon.a (mon.0) 899 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 
2026-03-09T18:35:32.206341+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:32 vm00 bash[22468]: audit 2026-03-09T18:35:32.239715+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.839966+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: cluster 2026-03-09T18:35:31.840140+0000 mon.a (mon.0) 881 : cluster [DBG] mgrmap e28: y(active, starting, since 0.685479s) 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.843020+0000 mon.a (mon.0) 882 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.843088+0000 mon.a (mon.0) 883 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.843140+0000 mon.a (mon.0) 884 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 
2026-03-09T18:35:31.844227+0000 mon.a (mon.0) 885 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.844449+0000 mon.a (mon.0) 886 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.844552+0000 mon.a (mon.0) 887 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.844622+0000 mon.a (mon.0) 888 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.844716+0000 mon.a (mon.0) 889 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.844781+0000 mon.a (mon.0) 890 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.844857+0000 mon.a (mon.0) 891 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 
2026-03-09T18:35:31.844925+0000 mon.a (mon.0) 892 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.845064+0000 mon.a (mon.0) 893 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.845737+0000 mon.a (mon.0) 894 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.845810+0000 mon.a (mon.0) 895 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:31.846025+0000 mon.a (mon.0) 896 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: cluster 2026-03-09T18:35:32.154225+0000 mon.a (mon.0) 897 : cluster [INF] Manager daemon y is now available 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:32.184864+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:32.201637+0000 mon.a (mon.0) 899 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:32.206341+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:35:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:32 vm00 bash[17468]: audit 2026-03-09T18:35:32.239715+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.839966+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: cluster 2026-03-09T18:35:31.840140+0000 mon.a (mon.0) 881 : cluster [DBG] mgrmap e28: y(active, starting, since 0.685479s) 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.843020+0000 mon.a (mon.0) 882 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.843088+0000 mon.a (mon.0) 883 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.843140+0000 mon.a (mon.0) 884 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "c"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844227+0000 mon.a (mon.0) 885 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844449+0000 mon.a (mon.0) 886 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844552+0000 mon.a (mon.0) 887 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844622+0000 mon.a (mon.0) 888 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844716+0000 mon.a (mon.0) 889 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844781+0000 mon.a (mon.0) 890 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844857+0000 mon.a (mon.0) 891 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: 
dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.844925+0000 mon.a (mon.0) 892 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.845064+0000 mon.a (mon.0) 893 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.845737+0000 mon.a (mon.0) 894 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.845810+0000 mon.a (mon.0) 895 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:31.846025+0000 mon.a (mon.0) 896 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: cluster 2026-03-09T18:35:32.154225+0000 mon.a (mon.0) 897 : cluster [INF] Manager daemon y is now available 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:32.184864+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 
2026-03-09T18:35:32.201637+0000 mon.a (mon.0) 899 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:32.206341+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:32 vm08 bash[17774]: audit 2026-03-09T18:35:32.239715+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:32 vm08 bash[36576]: debug 2026-03-09T18:35:32.782+0000 7ff59cd2a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:32 vm08 bash[36576]: debug 2026-03-09T18:35:32.898+0000 7ff59cd2a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:35:33.152 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: from numpy import show_config as show_numpy_config 2026-03-09T18:35:33.152 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.022+0000 7ff59cd2a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:35:33.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.150+0000 7ff59cd2a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:35:33.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.182+0000 7ff59cd2a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:35:33.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.214+0000 7ff59cd2a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:35:33.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.254+0000 7ff59cd2a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:35:33.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.298+0000 7ff59cd2a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:33 vm08 bash[17774]: cluster 2026-03-09T18:35:32.861865+0000 mon.a (mon.0) 902 : cluster [DBG] mgrmap e29: y(active, since 1.7072s) 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:33 vm08 bash[17774]: cephadm 2026-03-09T18:35:33.005949+0000 mgr.y (mgr.24880) 2 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Bus STARTING 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:33 vm08 bash[17774]: cephadm 2026-03-09T18:35:33.107169+0000 mgr.y (mgr.24880) 3 : cephadm [INF] 
[09/Mar/2026:18:35:33] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:33 vm08 bash[17774]: cephadm 2026-03-09T18:35:33.214938+0000 mgr.y (mgr.24880) 4 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:33 vm08 bash[17774]: cephadm 2026-03-09T18:35:33.214973+0000 mgr.y (mgr.24880) 5 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Bus STARTED 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:33 vm08 bash[17774]: cephadm 2026-03-09T18:35:33.215375+0000 mgr.y (mgr.24880) 6 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Client ('192.168.123.100', 46534) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:35:33.974 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.694+0000 7ff59cd2a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:35:33.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.726+0000 7ff59cd2a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:35:33.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.758+0000 7ff59cd2a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:35:33.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.902+0000 7ff59cd2a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:35:33.975 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.942+0000 7ff59cd2a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:35:34.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:33 vm00 
bash[22468]: cluster 2026-03-09T18:35:32.861865+0000 mon.a (mon.0) 902 : cluster [DBG] mgrmap e29: y(active, since 1.7072s) 2026-03-09T18:35:34.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:33 vm00 bash[22468]: cephadm 2026-03-09T18:35:33.005949+0000 mgr.y (mgr.24880) 2 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Bus STARTING 2026-03-09T18:35:34.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:33 vm00 bash[22468]: cephadm 2026-03-09T18:35:33.107169+0000 mgr.y (mgr.24880) 3 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:33 vm00 bash[22468]: cephadm 2026-03-09T18:35:33.214938+0000 mgr.y (mgr.24880) 4 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:33 vm00 bash[22468]: cephadm 2026-03-09T18:35:33.214973+0000 mgr.y (mgr.24880) 5 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Bus STARTED 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:33 vm00 bash[22468]: cephadm 2026-03-09T18:35:33.215375+0000 mgr.y (mgr.24880) 6 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Client ('192.168.123.100', 46534) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:33 vm00 bash[17468]: cluster 2026-03-09T18:35:32.861865+0000 mon.a (mon.0) 902 : cluster [DBG] mgrmap e29: y(active, since 1.7072s) 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:33 vm00 bash[17468]: cephadm 2026-03-09T18:35:33.005949+0000 mgr.y (mgr.24880) 2 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Bus STARTING 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:33 vm00 bash[17468]: cephadm 2026-03-09T18:35:33.107169+0000 mgr.y (mgr.24880) 3 : cephadm 
[INF] [09/Mar/2026:18:35:33] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:33 vm00 bash[17468]: cephadm 2026-03-09T18:35:33.214938+0000 mgr.y (mgr.24880) 4 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:33 vm00 bash[17468]: cephadm 2026-03-09T18:35:33.214973+0000 mgr.y (mgr.24880) 5 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Bus STARTED 2026-03-09T18:35:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:33 vm00 bash[17468]: cephadm 2026-03-09T18:35:33.215375+0000 mgr.y (mgr.24880) 6 : cephadm [INF] [09/Mar/2026:18:35:33] ENGINE Client ('192.168.123.100', 46534) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:35:34.242 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:33 vm08 bash[36576]: debug 2026-03-09T18:35:33.978+0000 7ff59cd2a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:35:34.242 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.082+0000 7ff59cd2a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:35:34.617 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.238+0000 7ff59cd2a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:35:34.617 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.398+0000 7ff59cd2a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:35:34.617 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.434+0000 7ff59cd2a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:35:34.617 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 
18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.474+0000 7ff59cd2a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:35:34.936 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: cluster 2026-03-09T18:35:33.845990+0000 mgr.y (mgr.24880) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: cluster 2026-03-09T18:35:33.868594+0000 mon.a (mon.0) 903 : cluster [DBG] mgrmap e30: y(active, since 2s) 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: audit 2026-03-09T18:35:34.829215+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: audit 2026-03-09T18:35:34.829658+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: audit 2026-03-09T18:35:34.830746+0000 mon.b (mon.2) 132 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: audit 2026-03-09T18:35:34.831050+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.? 
192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:34 vm08 bash[17774]: cluster 2026-03-09T18:35:34.833785+0000 mon.a (mon.0) 904 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.614+0000 7ff59cd2a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: debug 2026-03-09T18:35:34.822+0000 7ff59cd2a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: [09/Mar/2026:18:35:34] ENGINE Bus STARTING 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: CherryPy Checker: 2026-03-09T18:35:34.937 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: The Application mounted at '' has an empty config. 2026-03-09T18:35:35.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: cluster 2026-03-09T18:35:33.845990+0000 mgr.y (mgr.24880) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:35.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: cluster 2026-03-09T18:35:33.868594+0000 mon.a (mon.0) 903 : cluster [DBG] mgrmap e30: y(active, since 2s) 2026-03-09T18:35:35.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: audit 2026-03-09T18:35:34.829215+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.? 
192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: audit 2026-03-09T18:35:34.829658+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: audit 2026-03-09T18:35:34.830746+0000 mon.b (mon.2) 132 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: audit 2026-03-09T18:35:34.831050+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:34 vm00 bash[22468]: cluster 2026-03-09T18:35:34.833785+0000 mon.a (mon.0) 904 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: cluster 2026-03-09T18:35:33.845990+0000 mgr.y (mgr.24880) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: cluster 2026-03-09T18:35:33.868594+0000 mon.a (mon.0) 903 : cluster [DBG] mgrmap e30: y(active, since 2s) 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: audit 2026-03-09T18:35:34.829215+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.? 
192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: audit 2026-03-09T18:35:34.829658+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: audit 2026-03-09T18:35:34.830746+0000 mon.b (mon.2) 132 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: audit 2026-03-09T18:35:34.831050+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.? 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:35:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:34 vm00 bash[17468]: cluster 2026-03-09T18:35:34.833785+0000 mon.a (mon.0) 904 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:35:35.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: [09/Mar/2026:18:35:34] ENGINE Serving on http://:::9283 2026-03-09T18:35:35.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:35:34 vm08 bash[36576]: [09/Mar/2026:18:35:34] ENGINE Bus STARTED 2026-03-09T18:35:36.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:35 vm00 bash[22468]: cluster 2026-03-09T18:35:34.885404+0000 mon.a (mon.0) 905 : cluster [DBG] mgrmap e31: y(active, since 3s), standbys: x 2026-03-09T18:35:36.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:35 vm00 bash[22468]: audit 2026-03-09T18:35:34.885547+0000 mon.a (mon.0) 906 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": 
"x"}]: dispatch 2026-03-09T18:35:36.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:35 vm00 bash[17468]: cluster 2026-03-09T18:35:34.885404+0000 mon.a (mon.0) 905 : cluster [DBG] mgrmap e31: y(active, since 3s), standbys: x 2026-03-09T18:35:36.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:35 vm00 bash[17468]: audit 2026-03-09T18:35:34.885547+0000 mon.a (mon.0) 906 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:35:36.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:35 vm08 bash[17774]: cluster 2026-03-09T18:35:34.885404+0000 mon.a (mon.0) 905 : cluster [DBG] mgrmap e31: y(active, since 3s), standbys: x 2026-03-09T18:35:36.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:35 vm08 bash[17774]: audit 2026-03-09T18:35:34.885547+0000 mon.a (mon.0) 906 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:35:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:36 vm08 bash[17774]: cluster 2026-03-09T18:35:35.846282+0000 mgr.y (mgr.24880) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:36 vm08 bash[17774]: cluster 2026-03-09T18:35:35.887621+0000 mon.a (mon.0) 907 : cluster [DBG] mgrmap e32: y(active, since 4s), standbys: x 2026-03-09T18:35:37.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:36 vm00 bash[22468]: cluster 2026-03-09T18:35:35.846282+0000 mgr.y (mgr.24880) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:37.377 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:36 vm00 bash[22468]: cluster 2026-03-09T18:35:35.887621+0000 mon.a (mon.0) 907 : cluster [DBG] mgrmap e32: y(active, since 4s), standbys: x 
2026-03-09T18:35:37.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:36 vm00 bash[17468]: cluster 2026-03-09T18:35:35.846282+0000 mgr.y (mgr.24880) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:37.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:36 vm00 bash[17468]: cluster 2026-03-09T18:35:35.887621+0000 mon.a (mon.0) 907 : cluster [DBG] mgrmap e32: y(active, since 4s), standbys: x 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cluster 2026-03-09T18:35:37.846582+0000 mgr.y (mgr.24880) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:37.988439+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:37.993814+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.056316+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.061589+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.529668+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 
2026-03-09T18:35:38.555365+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.558063+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.619559+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.625845+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.627479+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.628157+0000 mon.a (mon.0) 918 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.628568+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.629217+0000 mgr.y (mgr.24880) 10 : cephadm [INF] Updating 
vm00:/etc/ceph/ceph.conf 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.629357+0000 mgr.y (mgr.24880) 11 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.661643+0000 mgr.y (mgr.24880) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.664081+0000 mgr.y (mgr.24880) 13 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.692804+0000 mgr.y (mgr.24880) 14 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.695983+0000 mgr.y (mgr.24880) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.724659+0000 mgr.y (mgr.24880) 16 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.729548+0000 mgr.y (mgr.24880) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.762264+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.767643+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.771611+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.779992+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.785030+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.799678+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:35:39.315 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:38 vm00 bash[17468]: audit 2026-03-09T18:35:38.803239+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cluster 2026-03-09T18:35:37.846582+0000 mgr.y (mgr.24880) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:39.316 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:37.988439+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:37.993814+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.056316+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.061589+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.529668+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.555365+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.558063+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.619559+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 
2026-03-09T18:35:38.625845+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.627479+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.628157+0000 mon.a (mon.0) 918 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.628568+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.629217+0000 mgr.y (mgr.24880) 10 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.629357+0000 mgr.y (mgr.24880) 11 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.661643+0000 mgr.y (mgr.24880) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.664081+0000 mgr.y (mgr.24880) 13 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:35:39.316 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.692804+0000 mgr.y (mgr.24880) 14 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.695983+0000 mgr.y (mgr.24880) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.724659+0000 mgr.y (mgr.24880) 16 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.729548+0000 mgr.y (mgr.24880) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.762264+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.767643+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.771611+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.779992+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 
2026-03-09T18:35:38.785030+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.799678+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:35:39.316 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:38 vm00 bash[22468]: audit 2026-03-09T18:35:38.803239+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cluster 2026-03-09T18:35:37.846582+0000 mgr.y (mgr.24880) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:37.988439+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:37.993814+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.056316+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 
2026-03-09T18:35:38.061589+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.529668+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.555365+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.558063+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.619559+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.625845+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.627479+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.628157+0000 mon.a (mon.0) 918 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:35:39.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.628568+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:35:39.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.629217+0000 mgr.y (mgr.24880) 10 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.629357+0000 mgr.y (mgr.24880) 11 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.661643+0000 mgr.y (mgr.24880) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.664081+0000 mgr.y (mgr.24880) 13 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.692804+0000 mgr.y (mgr.24880) 14 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.695983+0000 mgr.y (mgr.24880) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.724659+0000 mgr.y (mgr.24880) 16 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 
vm08 bash[17774]: cephadm 2026-03-09T18:35:38.729548+0000 mgr.y (mgr.24880) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.762264+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.767643+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.771611+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.779992+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.785030+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.799678+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:35:39.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:38 vm08 bash[17774]: audit 2026-03-09T18:35:38.803239+0000 mon.a (mon.0) 926 : 
audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 systemd[1]: Stopping Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.939Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 
2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.940Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.941Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.942Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.942Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[38540]: ts=2026-03-09T18:35:39.942Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T18:35:39.974 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 bash[40665]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus-a 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.799381+0000 mgr.y (mgr.24880) 18 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 
2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: cephadm 2026-03-09T18:35:38.803714+0000 mgr.y (mgr.24880) 19 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:39.251342+0000 mon.a (mon.0) 927 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:39.257203+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: cephadm 2026-03-09T18:35:39.258301+0000 mgr.y (mgr.24880) 20 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: cephadm 2026-03-09T18:35:39.460093+0000 mgr.y (mgr.24880) 21 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:39.752878+0000 mon.a (mon.0) 929 : audit [DBG] from='client.? 
192.168.123.100:0/1856685869' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.023477+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.030089+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.033522+0000 mon.a (mon.0) 932 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.042159+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.043909+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.045125+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.052590+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 
09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.056017+0000 mon.a (mon.0) 937 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T18:35:40.251 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:40 vm08 bash[17774]: audit 2026-03-09T18:35:40.084726+0000 mon.a (mon.0) 938 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service: Deactivated successfully.
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 systemd[1]: Stopped Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:39 vm08 systemd[1]: Started Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.132Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.132Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.132Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm08 (none))"
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.132Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-09T18:35:40.251 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.132Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.133Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.133Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.138Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.138Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.141Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.141Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.372µs
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.141Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.146Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=3
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.166Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=3
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.180Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=3
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.180Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=3
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.180Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=174.727µs wal_replay_duration=39.553708ms wbl_replay_duration=131ns total_replay_duration=39.952345ms
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.183Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.183Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.183Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.204Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=21.115173ms db_storage=722ns remote_storage=1.213µs web_handler=501ns query_engine=490ns scrape=13.15676ms scrape_sd=98.685µs notify=7.414µs notify_sd=5.279µs rules=7.526333ms tracing=3.938µs
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.204Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-09T18:35:40.252 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:35:40 vm08 bash[40744]: ts=2026-03-09T18:35:40.204Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.799381+0000 mgr.y (mgr.24880) 18 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)...
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: cephadm 2026-03-09T18:35:38.803714+0000 mgr.y (mgr.24880) 19 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:39.251342+0000 mon.a (mon.0) 927 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:39.257203+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: cephadm 2026-03-09T18:35:39.258301+0000 mgr.y (mgr.24880) 20 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: cephadm 2026-03-09T18:35:39.460093+0000 mgr.y (mgr.24880) 21 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:39.752878+0000 mon.a (mon.0) 929 : audit [DBG] from='client.? 192.168.123.100:0/1856685869' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.023477+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.030089+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.033522+0000 mon.a (mon.0) 932 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.042159+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.043909+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.045125+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.052590+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.056017+0000 mon.a (mon.0) 937 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:40 vm00 bash[17468]: audit 2026-03-09T18:35:40.084726+0000 mon.a (mon.0) 938 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.799381+0000 mgr.y (mgr.24880) 18 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)...
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: cephadm 2026-03-09T18:35:38.803714+0000 mgr.y (mgr.24880) 19 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:39.251342+0000 mon.a (mon.0) 927 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:39.257203+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: cephadm 2026-03-09T18:35:39.258301+0000 mgr.y (mgr.24880) 20 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: cephadm 2026-03-09T18:35:39.460093+0000 mgr.y (mgr.24880) 21 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:39.752878+0000 mon.a (mon.0) 929 : audit [DBG] from='client.? 192.168.123.100:0/1856685869' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.023477+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.030089+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.033522+0000 mon.a (mon.0) 932 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.042159+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.043909+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.045125+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.052590+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.056017+0000 mon.a (mon.0) 937 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T18:35:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:40 vm00 bash[22468]: audit 2026-03-09T18:35:40.084726+0000 mon.a (mon.0) 938 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:35:41.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:41 vm00 bash[17468]: cluster 2026-03-09T18:35:39.847158+0000 mgr.y (mgr.24880) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-09T18:35:41.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:41 vm00 bash[17468]: audit 2026-03-09T18:35:40.033947+0000 mgr.y (mgr.24880) 23 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:35:41.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:41 vm00 bash[17468]: cephadm 2026-03-09T18:35:40.043207+0000 mgr.y (mgr.24880) 24 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard
2026-03-09T18:35:41.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:41 vm00 bash[17468]: audit 2026-03-09T18:35:40.044201+0000 mgr.y (mgr.24880) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:35:41.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:41 vm00 bash[17468]: audit 2026-03-09T18:35:40.045374+0000 mgr.y (mgr.24880) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:35:41.627 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:41 vm00 bash[17468]: audit 2026-03-09T18:35:40.056287+0000 mgr.y (mgr.24880) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T18:35:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:41 vm00 bash[22468]: cluster 2026-03-09T18:35:39.847158+0000 mgr.y (mgr.24880) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-09T18:35:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:41 vm00 bash[22468]: audit 2026-03-09T18:35:40.033947+0000 mgr.y (mgr.24880) 23 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:35:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:41 vm00 bash[22468]: cephadm 2026-03-09T18:35:40.043207+0000 mgr.y (mgr.24880) 24 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard
2026-03-09T18:35:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:41 vm00 bash[22468]: audit 2026-03-09T18:35:40.044201+0000 mgr.y (mgr.24880) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:35:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:41 vm00 bash[22468]: audit 2026-03-09T18:35:40.045374+0000 mgr.y (mgr.24880) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:35:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:41 vm00 bash[22468]: audit 2026-03-09T18:35:40.056287+0000 mgr.y (mgr.24880) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T18:35:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:41 vm08 bash[17774]: cluster 2026-03-09T18:35:39.847158+0000 mgr.y (mgr.24880) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-09T18:35:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:41 vm08 bash[17774]: audit 2026-03-09T18:35:40.033947+0000 mgr.y (mgr.24880) 23 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:35:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:41 vm08 bash[17774]: cephadm 2026-03-09T18:35:40.043207+0000 mgr.y (mgr.24880) 24 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard
2026-03-09T18:35:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:41 vm08 bash[17774]: audit 2026-03-09T18:35:40.044201+0000 mgr.y (mgr.24880) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:35:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:41 vm08 bash[17774]: audit 2026-03-09T18:35:40.045374+0000 mgr.y (mgr.24880) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:35:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:41 vm08 bash[17774]: audit 2026-03-09T18:35:40.056287+0000 mgr.y (mgr.24880) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T18:35:43.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:43 vm00 bash[17468]: cluster 2026-03-09T18:35:41.847437+0000 mgr.y (mgr.24880) 28 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T18:35:43.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:43 vm00 bash[22468]: cluster 2026-03-09T18:35:41.847437+0000 mgr.y (mgr.24880) 28 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T18:35:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:43 vm08 bash[17774]: cluster 2026-03-09T18:35:41.847437+0000 mgr.y (mgr.24880) 28 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: cluster 2026-03-09T18:35:43.847913+0000 mgr.y (mgr.24880) 29 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.365598+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.372866+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.455449+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.460661+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.461399+0000 mon.a (mon.0) 943 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:35:45.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.461900+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:45 vm00 bash[22468]: audit 2026-03-09T18:35:45.465971+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: cluster 2026-03-09T18:35:43.847913+0000 mgr.y (mgr.24880) 29 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.365598+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.372866+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.455449+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.460661+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.461399+0000 mon.a (mon.0) 943 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.461900+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:35:45.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:45 vm00 bash[17468]: audit 2026-03-09T18:35:45.465971+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: cluster 2026-03-09T18:35:43.847913+0000 mgr.y (mgr.24880) 29 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.365598+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.372866+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.455449+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.460661+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.461399+0000 mon.a (mon.0) 943 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.461900+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:35:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:45 vm08 bash[17774]: audit 2026-03-09T18:35:45.465971+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y'
2026-03-09T18:35:47.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:47 vm00 bash[22468]: cluster 2026-03-09T18:35:45.848242+0000 mgr.y (mgr.24880) 30 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:47.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:47 vm00 bash[22468]: audit 2026-03-09T18:35:47.183727+0000 mon.a (mon.0) 946 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:47.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:47 vm00 bash[17468]: cluster 2026-03-09T18:35:45.848242+0000 mgr.y (mgr.24880) 30 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:47.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:47 vm00 bash[17468]: audit 2026-03-09T18:35:47.183727+0000 mon.a (mon.0) 946 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:47.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:47 vm08 bash[17774]: cluster 2026-03-09T18:35:45.848242+0000 mgr.y (mgr.24880) 30 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:47.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:47 vm08 bash[17774]: audit 2026-03-09T18:35:47.183727+0000 mon.a (mon.0) 946 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:35:49.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:49 vm00 bash[17468]: cluster 2026-03-09T18:35:47.848564+0000 mgr.y (mgr.24880) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:49.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:49 vm00 bash[22468]: cluster 2026-03-09T18:35:47.848564+0000 mgr.y (mgr.24880) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:49.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:35:49] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0"
2026-03-09T18:35:49.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:49 vm08 bash[17774]: cluster 2026-03-09T18:35:47.848564+0000 mgr.y (mgr.24880) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:50.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:50 vm00 bash[17468]: audit 2026-03-09T18:35:49.595870+0000 mgr.y (mgr.24880) 32 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:50.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:50 vm00 bash[22468]: audit 2026-03-09T18:35:49.595870+0000 mgr.y (mgr.24880) 32 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:50.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:50 vm08 bash[17774]: audit 2026-03-09T18:35:49.595870+0000 mgr.y (mgr.24880) 32 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:35:51.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:51 vm00 bash[22468]: cluster 2026-03-09T18:35:49.849064+0000 mgr.y (mgr.24880) 33 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:51.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:51 vm00 bash[17468]: cluster 2026-03-09T18:35:49.849064+0000 mgr.y (mgr.24880) 33 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:51.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:51 vm08 bash[17774]: cluster 2026-03-09T18:35:49.849064+0000 mgr.y (mgr.24880) 33 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:35:53.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:53 vm00 bash[22468]: cluster 2026-03-09T18:35:51.849343+0000 mgr.y (mgr.24880) 34 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s
2026-03-09T18:35:53.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:53 vm00 bash[17468]: cluster 2026-03-09T18:35:51.849343+0000 mgr.y (mgr.24880) 34 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s
2026-03-09T18:35:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:53 vm08 bash[17774]: cluster 2026-03-09T18:35:51.849343+0000 mgr.y (mgr.24880) 34 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s
2026-03-09T18:35:55.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:55 vm00 bash[22468]: cluster 2026-03-09T18:35:53.849847+0000 mgr.y (mgr.24880) 35 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:35:55.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:55 vm00 bash[17468]: cluster 2026-03-09T18:35:53.849847+0000 mgr.y (mgr.24880) 35 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:35:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:55 vm08 bash[17774]: cluster 2026-03-09T18:35:53.849847+0000 mgr.y (mgr.24880) 35 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:35:57.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:57 vm00 bash[17468]: cluster 2026-03-09T18:35:55.850124+0000 mgr.y (mgr.24880) 36 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:57.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:57 vm00 bash[22468]: cluster 2026-03-09T18:35:55.850124+0000 mgr.y (mgr.24880) 36 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:57 vm08 bash[17774]: cluster 2026-03-09T18:35:55.850124+0000 mgr.y (mgr.24880) 36 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:59.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:35:59 vm00 bash[17468]: cluster 2026-03-09T18:35:57.850391+0000 mgr.y (mgr.24880) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:59.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:35:59 vm00 bash[22468]: cluster 2026-03-09T18:35:57.850391+0000 mgr.y (mgr.24880) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:35:59.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:35:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:35:59] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0"
2026-03-09T18:35:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:35:59 vm08 bash[17774]: cluster 2026-03-09T18:35:57.850391+0000 mgr.y (mgr.24880) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:36:00.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:00 vm00 bash[17468]: audit 2026-03-09T18:35:59.601599+0000 mgr.y (mgr.24880) 38 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:36:00.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:00 vm00 bash[22468]: audit 2026-03-09T18:35:59.601599+0000 mgr.y (mgr.24880) 38 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:36:00.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:00 vm08 bash[17774]: audit 2026-03-09T18:35:59.601599+0000 mgr.y (mgr.24880) 38 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:36:01.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:01 vm00 bash[22468]: cluster 2026-03-09T18:35:59.850930+0000 mgr.y (mgr.24880) 39 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:36:01.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:01 vm00 bash[17468]: cluster 2026-03-09T18:35:59.850930+0000 mgr.y (mgr.24880) 39 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:36:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:01 vm08 bash[17774]: cluster 2026-03-09T18:35:59.850930+0000 mgr.y (mgr.24880) 39 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:36:02.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:02 vm00 bash[17468]: audit 2026-03-09T18:36:02.183795+0000 mon.a (mon.0) 947 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:36:02.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:02 vm00 bash[22468]: audit 2026-03-09T18:36:02.183795+0000 mon.a (mon.0) 947 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:36:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:02 vm08 bash[17774]: audit 2026-03-09T18:36:02.183795+0000 mon.a (mon.0) 947 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:36:03.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09
18:36:03 vm00 bash[17468]: cluster 2026-03-09T18:36:01.851182+0000 mgr.y (mgr.24880) 40 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:03.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:03 vm00 bash[22468]: cluster 2026-03-09T18:36:01.851182+0000 mgr.y (mgr.24880) 40 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:03.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:03 vm08 bash[17774]: cluster 2026-03-09T18:36:01.851182+0000 mgr.y (mgr.24880) 40 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:05.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:05 vm00 bash[17468]: cluster 2026-03-09T18:36:03.851719+0000 mgr.y (mgr.24880) 41 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:05.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:05 vm00 bash[22468]: cluster 2026-03-09T18:36:03.851719+0000 mgr.y (mgr.24880) 41 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:05 vm08 bash[17774]: cluster 2026-03-09T18:36:03.851719+0000 mgr.y (mgr.24880) 41 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:07.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:07 vm00 bash[17468]: cluster 2026-03-09T18:36:05.852042+0000 mgr.y (mgr.24880) 42 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:07.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:07 
vm00 bash[22468]: cluster 2026-03-09T18:36:05.852042+0000 mgr.y (mgr.24880) 42 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:07 vm08 bash[17774]: cluster 2026-03-09T18:36:05.852042+0000 mgr.y (mgr.24880) 42 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:09.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:09 vm00 bash[22468]: cluster 2026-03-09T18:36:07.852431+0000 mgr.y (mgr.24880) 43 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:09.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:09 vm00 bash[17468]: cluster 2026-03-09T18:36:07.852431+0000 mgr.y (mgr.24880) 43 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:09.877 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:36:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:36:09] "GET /metrics HTTP/1.1" 200 37541 "" "Prometheus/2.51.0" 2026-03-09T18:36:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:09 vm08 bash[17774]: cluster 2026-03-09T18:36:07.852431+0000 mgr.y (mgr.24880) 43 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:10.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:10 vm00 bash[17468]: audit 2026-03-09T18:36:09.609472+0000 mgr.y (mgr.24880) 44 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:10 vm00 bash[22468]: audit 2026-03-09T18:36:09.609472+0000 mgr.y (mgr.24880) 44 : audit 
[DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:10.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:10 vm08 bash[17774]: audit 2026-03-09T18:36:09.609472+0000 mgr.y (mgr.24880) 44 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:11.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:11 vm00 bash[22468]: cluster 2026-03-09T18:36:09.852904+0000 mgr.y (mgr.24880) 45 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:11.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:11 vm00 bash[17468]: cluster 2026-03-09T18:36:09.852904+0000 mgr.y (mgr.24880) 45 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:11.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:11 vm08 bash[17774]: cluster 2026-03-09T18:36:09.852904+0000 mgr.y (mgr.24880) 45 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:13.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:13 vm00 bash[22468]: cluster 2026-03-09T18:36:11.853157+0000 mgr.y (mgr.24880) 46 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:13.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:13 vm00 bash[17468]: cluster 2026-03-09T18:36:11.853157+0000 mgr.y (mgr.24880) 46 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:13 vm08 bash[17774]: cluster 2026-03-09T18:36:11.853157+0000 mgr.y (mgr.24880) 
46 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:15.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:15 vm00 bash[22468]: cluster 2026-03-09T18:36:13.853626+0000 mgr.y (mgr.24880) 47 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:15.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:15 vm00 bash[17468]: cluster 2026-03-09T18:36:13.853626+0000 mgr.y (mgr.24880) 47 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:15 vm08 bash[17774]: cluster 2026-03-09T18:36:13.853626+0000 mgr.y (mgr.24880) 47 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:17.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:17 vm00 bash[22468]: cluster 2026-03-09T18:36:15.853893+0000 mgr.y (mgr.24880) 48 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:17.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:17 vm00 bash[22468]: audit 2026-03-09T18:36:17.184213+0000 mon.a (mon.0) 948 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:17.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:17 vm00 bash[17468]: cluster 2026-03-09T18:36:15.853893+0000 mgr.y (mgr.24880) 48 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:17.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:17 vm00 bash[17468]: audit 2026-03-09T18:36:17.184213+0000 mon.a (mon.0) 948 : 
audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:17.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:17 vm08 bash[17774]: cluster 2026-03-09T18:36:15.853893+0000 mgr.y (mgr.24880) 48 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:17.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:17 vm08 bash[17774]: audit 2026-03-09T18:36:17.184213+0000 mon.a (mon.0) 948 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:19.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:19 vm00 bash[22468]: cluster 2026-03-09T18:36:17.854182+0000 mgr.y (mgr.24880) 49 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:19.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:19 vm00 bash[17468]: cluster 2026-03-09T18:36:17.854182+0000 mgr.y (mgr.24880) 49 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:36:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:36:19] "GET /metrics HTTP/1.1" 200 37540 "" "Prometheus/2.51.0" 2026-03-09T18:36:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:19 vm08 bash[17774]: cluster 2026-03-09T18:36:17.854182+0000 mgr.y (mgr.24880) 49 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:20 vm08 bash[17774]: audit 2026-03-09T18:36:19.619471+0000 mgr.y (mgr.24880) 50 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:21.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:20 vm00 bash[22468]: audit 2026-03-09T18:36:19.619471+0000 mgr.y (mgr.24880) 50 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:21.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:20 vm00 bash[17468]: audit 2026-03-09T18:36:19.619471+0000 mgr.y (mgr.24880) 50 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:21 vm08 bash[17774]: cluster 2026-03-09T18:36:19.854605+0000 mgr.y (mgr.24880) 51 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:22.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:21 vm00 bash[22468]: cluster 2026-03-09T18:36:19.854605+0000 mgr.y (mgr.24880) 51 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:22.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:21 vm00 bash[17468]: cluster 2026-03-09T18:36:19.854605+0000 mgr.y (mgr.24880) 51 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:23 vm08 bash[17774]: cluster 2026-03-09T18:36:21.854912+0000 mgr.y (mgr.24880) 52 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:24.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:23 vm00 bash[22468]: cluster 2026-03-09T18:36:21.854912+0000 mgr.y (mgr.24880) 52 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 
457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:24.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:23 vm00 bash[17468]: cluster 2026-03-09T18:36:21.854912+0000 mgr.y (mgr.24880) 52 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:25 vm08 bash[17774]: cluster 2026-03-09T18:36:23.855508+0000 mgr.y (mgr.24880) 53 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:26.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:25 vm00 bash[22468]: cluster 2026-03-09T18:36:23.855508+0000 mgr.y (mgr.24880) 53 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:26.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:25 vm00 bash[17468]: cluster 2026-03-09T18:36:23.855508+0000 mgr.y (mgr.24880) 53 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:27 vm08 bash[17774]: cluster 2026-03-09T18:36:25.855813+0000 mgr.y (mgr.24880) 54 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:28.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:27 vm00 bash[22468]: cluster 2026-03-09T18:36:25.855813+0000 mgr.y (mgr.24880) 54 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:28.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:27 vm00 bash[17468]: cluster 2026-03-09T18:36:25.855813+0000 mgr.y (mgr.24880) 54 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB 
data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:29.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:29 vm00 bash[22468]: cluster 2026-03-09T18:36:27.856152+0000 mgr.y (mgr.24880) 55 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:29.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:29 vm00 bash[17468]: cluster 2026-03-09T18:36:27.856152+0000 mgr.y (mgr.24880) 55 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:36:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:36:29] "GET /metrics HTTP/1.1" 200 37540 "" "Prometheus/2.51.0" 2026-03-09T18:36:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:29 vm08 bash[17774]: cluster 2026-03-09T18:36:27.856152+0000 mgr.y (mgr.24880) 55 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:30 vm08 bash[17774]: audit 2026-03-09T18:36:29.630087+0000 mgr.y (mgr.24880) 56 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:31.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:30 vm00 bash[22468]: audit 2026-03-09T18:36:29.630087+0000 mgr.y (mgr.24880) 56 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:31.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:30 vm00 bash[17468]: audit 2026-03-09T18:36:29.630087+0000 mgr.y (mgr.24880) 56 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T18:36:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:31 vm08 bash[17774]: cluster 2026-03-09T18:36:29.856724+0000 mgr.y (mgr.24880) 57 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:32.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:31 vm00 bash[22468]: cluster 2026-03-09T18:36:29.856724+0000 mgr.y (mgr.24880) 57 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:32.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:31 vm00 bash[17468]: cluster 2026-03-09T18:36:29.856724+0000 mgr.y (mgr.24880) 57 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:32 vm08 bash[17774]: audit 2026-03-09T18:36:32.184387+0000 mon.a (mon.0) 949 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:33.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:32 vm00 bash[22468]: audit 2026-03-09T18:36:32.184387+0000 mon.a (mon.0) 949 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:33.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:32 vm00 bash[17468]: audit 2026-03-09T18:36:32.184387+0000 mon.a (mon.0) 949 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:33 vm08 bash[17774]: cluster 2026-03-09T18:36:31.857001+0000 mgr.y (mgr.24880) 58 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:36:34.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:33 vm00 bash[22468]: cluster 2026-03-09T18:36:31.857001+0000 mgr.y (mgr.24880) 58 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:34.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:33 vm00 bash[17468]: cluster 2026-03-09T18:36:31.857001+0000 mgr.y (mgr.24880) 58 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:35 vm08 bash[17774]: cluster 2026-03-09T18:36:33.857524+0000 mgr.y (mgr.24880) 59 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:36.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:35 vm00 bash[22468]: cluster 2026-03-09T18:36:33.857524+0000 mgr.y (mgr.24880) 59 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:36.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:35 vm00 bash[17468]: cluster 2026-03-09T18:36:33.857524+0000 mgr.y (mgr.24880) 59 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:37 vm08 bash[17774]: cluster 2026-03-09T18:36:35.857822+0000 mgr.y (mgr.24880) 60 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:38.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:37 vm00 bash[22468]: cluster 2026-03-09T18:36:35.857822+0000 mgr.y (mgr.24880) 60 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 
0 op/s 2026-03-09T18:36:38.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:37 vm00 bash[17468]: cluster 2026-03-09T18:36:35.857822+0000 mgr.y (mgr.24880) 60 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:39.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:39 vm00 bash[22468]: cluster 2026-03-09T18:36:37.858118+0000 mgr.y (mgr.24880) 61 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:39.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:39 vm00 bash[17468]: cluster 2026-03-09T18:36:37.858118+0000 mgr.y (mgr.24880) 61 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:39.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:36:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:36:39] "GET /metrics HTTP/1.1" 200 37540 "" "Prometheus/2.51.0" 2026-03-09T18:36:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:39 vm08 bash[17774]: cluster 2026-03-09T18:36:37.858118+0000 mgr.y (mgr.24880) 61 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:40 vm08 bash[17774]: audit 2026-03-09T18:36:39.632940+0000 mgr.y (mgr.24880) 62 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:41.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:40 vm00 bash[22468]: audit 2026-03-09T18:36:39.632940+0000 mgr.y (mgr.24880) 62 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:41.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:36:40 vm00 bash[17468]: audit 2026-03-09T18:36:39.632940+0000 mgr.y (mgr.24880) 62 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:41 vm08 bash[17774]: cluster 2026-03-09T18:36:39.858615+0000 mgr.y (mgr.24880) 63 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:42.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:41 vm00 bash[22468]: cluster 2026-03-09T18:36:39.858615+0000 mgr.y (mgr.24880) 63 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:42.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:41 vm00 bash[17468]: cluster 2026-03-09T18:36:39.858615+0000 mgr.y (mgr.24880) 63 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:43 vm08 bash[17774]: cluster 2026-03-09T18:36:41.858926+0000 mgr.y (mgr.24880) 64 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:44.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:43 vm00 bash[22468]: cluster 2026-03-09T18:36:41.858926+0000 mgr.y (mgr.24880) 64 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:44.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:43 vm00 bash[17468]: cluster 2026-03-09T18:36:41.858926+0000 mgr.y (mgr.24880) 64 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:46.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:36:45 vm00 bash[22468]: cluster 2026-03-09T18:36:43.859432+0000 mgr.y (mgr.24880) 65 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:46.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:45 vm00 bash[22468]: audit 2026-03-09T18:36:45.509030+0000 mon.a (mon.0) 950 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:36:46.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:45 vm00 bash[17468]: cluster 2026-03-09T18:36:43.859432+0000 mgr.y (mgr.24880) 65 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:45 vm00 bash[17468]: audit 2026-03-09T18:36:45.509030+0000 mon.a (mon.0) 950 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:36:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:45 vm08 bash[17774]: cluster 2026-03-09T18:36:43.859432+0000 mgr.y (mgr.24880) 65 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:45 vm08 bash[17774]: audit 2026-03-09T18:36:45.509030+0000 mon.a (mon.0) 950 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:36:47.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:46 vm00 bash[22468]: audit 2026-03-09T18:36:45.832212+0000 mon.a (mon.0) 951 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:36:47.127 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:46 vm00 bash[22468]: audit 2026-03-09T18:36:45.832840+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:36:47.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:46 vm00 bash[22468]: audit 2026-03-09T18:36:45.838029+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:36:47.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:46 vm00 bash[17468]: audit 2026-03-09T18:36:45.832212+0000 mon.a (mon.0) 951 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:36:47.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:46 vm00 bash[17468]: audit 2026-03-09T18:36:45.832840+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:36:47.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:46 vm00 bash[17468]: audit 2026-03-09T18:36:45.838029+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:36:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:46 vm08 bash[17774]: audit 2026-03-09T18:36:45.832212+0000 mon.a (mon.0) 951 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:36:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:46 vm08 bash[17774]: audit 2026-03-09T18:36:45.832840+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:36:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:46 vm08 bash[17774]: audit 
2026-03-09T18:36:45.838029+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:36:48.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:47 vm00 bash[22468]: cluster 2026-03-09T18:36:45.859671+0000 mgr.y (mgr.24880) 66 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:48.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:47 vm00 bash[22468]: audit 2026-03-09T18:36:47.184423+0000 mon.a (mon.0) 954 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:48.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:47 vm00 bash[17468]: cluster 2026-03-09T18:36:45.859671+0000 mgr.y (mgr.24880) 66 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:48.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:47 vm00 bash[17468]: audit 2026-03-09T18:36:47.184423+0000 mon.a (mon.0) 954 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:47 vm08 bash[17774]: cluster 2026-03-09T18:36:45.859671+0000 mgr.y (mgr.24880) 66 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:47 vm08 bash[17774]: audit 2026-03-09T18:36:47.184423+0000 mon.a (mon.0) 954 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:36:49.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:48 vm00 bash[22468]: cluster 2026-03-09T18:36:47.859965+0000 mgr.y 
(mgr.24880) 67 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:49.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:48 vm00 bash[17468]: cluster 2026-03-09T18:36:47.859965+0000 mgr.y (mgr.24880) 67 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:48 vm08 bash[17774]: cluster 2026-03-09T18:36:47.859965+0000 mgr.y (mgr.24880) 67 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:49.820 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:36:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:36:49] "GET /metrics HTTP/1.1" 200 37539 "" "Prometheus/2.51.0" 2026-03-09T18:36:50.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:49 vm00 bash[22468]: audit 2026-03-09T18:36:49.643497+0000 mgr.y (mgr.24880) 68 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:50.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:49 vm00 bash[17468]: audit 2026-03-09T18:36:49.643497+0000 mgr.y (mgr.24880) 68 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:49 vm08 bash[17774]: audit 2026-03-09T18:36:49.643497+0000 mgr.y (mgr.24880) 68 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:36:51.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:50 vm00 bash[22468]: cluster 2026-03-09T18:36:49.860527+0000 mgr.y (mgr.24880) 69 : cluster [DBG] pgmap v42: 161 pgs: 161 
active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:51.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:50 vm00 bash[17468]: cluster 2026-03-09T18:36:49.860527+0000 mgr.y (mgr.24880) 69 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:50 vm08 bash[17774]: cluster 2026-03-09T18:36:49.860527+0000 mgr.y (mgr.24880) 69 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:53.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:53 vm00 bash[22468]: cluster 2026-03-09T18:36:51.860841+0000 mgr.y (mgr.24880) 70 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:53.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:53 vm00 bash[17468]: cluster 2026-03-09T18:36:51.860841+0000 mgr.y (mgr.24880) 70 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:53.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:53 vm08 bash[17774]: cluster 2026-03-09T18:36:51.860841+0000 mgr.y (mgr.24880) 70 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:55.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:55 vm00 bash[22468]: cluster 2026-03-09T18:36:53.861368+0000 mgr.y (mgr.24880) 71 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:55.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:55 vm00 bash[17468]: cluster 2026-03-09T18:36:53.861368+0000 mgr.y (mgr.24880) 71 : cluster [DBG] pgmap v44: 161 pgs: 161 
active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:55 vm08 bash[17774]: cluster 2026-03-09T18:36:53.861368+0000 mgr.y (mgr.24880) 71 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:36:57.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:57 vm00 bash[17468]: cluster 2026-03-09T18:36:55.861710+0000 mgr.y (mgr.24880) 72 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:57 vm00 bash[22468]: cluster 2026-03-09T18:36:55.861710+0000 mgr.y (mgr.24880) 72 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:57 vm08 bash[17774]: cluster 2026-03-09T18:36:55.861710+0000 mgr.y (mgr.24880) 72 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:59.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:36:59 vm00 bash[22468]: cluster 2026-03-09T18:36:57.861977+0000 mgr.y (mgr.24880) 73 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:36:59.877 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:36:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:36:59] "GET /metrics HTTP/1.1" 200 37539 "" "Prometheus/2.51.0" 2026-03-09T18:36:59.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:36:59 vm00 bash[17468]: cluster 2026-03-09T18:36:57.861977+0000 mgr.y (mgr.24880) 73 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:36:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:36:59 vm08 bash[17774]: cluster 2026-03-09T18:36:57.861977+0000 mgr.y (mgr.24880) 73 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:00.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:00 vm00 bash[17468]: audit 2026-03-09T18:36:59.653578+0000 mgr.y (mgr.24880) 74 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:00.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:00 vm00 bash[22468]: audit 2026-03-09T18:36:59.653578+0000 mgr.y (mgr.24880) 74 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:00.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:00 vm08 bash[17774]: audit 2026-03-09T18:36:59.653578+0000 mgr.y (mgr.24880) 74 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:01.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:01 vm00 bash[17468]: cluster 2026-03-09T18:36:59.862466+0000 mgr.y (mgr.24880) 75 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:01.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:01 vm00 bash[22468]: cluster 2026-03-09T18:36:59.862466+0000 mgr.y (mgr.24880) 75 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:01 vm08 bash[17774]: cluster 2026-03-09T18:36:59.862466+0000 mgr.y (mgr.24880) 75 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T18:37:02.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:02 vm00 bash[22468]: audit 2026-03-09T18:37:02.184569+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:02.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:02 vm00 bash[17468]: audit 2026-03-09T18:37:02.184569+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:02 vm08 bash[17774]: audit 2026-03-09T18:37:02.184569+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:03.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:03 vm00 bash[17468]: cluster 2026-03-09T18:37:01.862787+0000 mgr.y (mgr.24880) 76 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:03 vm00 bash[22468]: cluster 2026-03-09T18:37:01.862787+0000 mgr.y (mgr.24880) 76 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:03.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:03 vm08 bash[17774]: cluster 2026-03-09T18:37:01.862787+0000 mgr.y (mgr.24880) 76 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:05.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:05 vm00 bash[17468]: cluster 2026-03-09T18:37:03.863401+0000 mgr.y (mgr.24880) 77 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:05.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:05 vm00 bash[22468]: cluster 2026-03-09T18:37:03.863401+0000 mgr.y (mgr.24880) 77 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:05 vm08 bash[17774]: cluster 2026-03-09T18:37:03.863401+0000 mgr.y (mgr.24880) 77 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:07.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:07 vm00 bash[17468]: cluster 2026-03-09T18:37:05.863716+0000 mgr.y (mgr.24880) 78 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:07.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:07 vm00 bash[22468]: cluster 2026-03-09T18:37:05.863716+0000 mgr.y (mgr.24880) 78 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:07 vm08 bash[17774]: cluster 2026-03-09T18:37:05.863716+0000 mgr.y (mgr.24880) 78 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:09.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:09 vm00 bash[22468]: cluster 2026-03-09T18:37:07.864069+0000 mgr.y (mgr.24880) 79 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:09.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:09 vm00 bash[17468]: cluster 2026-03-09T18:37:07.864069+0000 mgr.y (mgr.24880) 79 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:37:09.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:37:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:37:09] "GET /metrics HTTP/1.1" 200 37538 "" "Prometheus/2.51.0" 2026-03-09T18:37:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:09 vm08 bash[17774]: cluster 2026-03-09T18:37:07.864069+0000 mgr.y (mgr.24880) 79 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:10.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:10 vm00 bash[22468]: audit 2026-03-09T18:37:09.663199+0000 mgr.y (mgr.24880) 80 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:10.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:10 vm00 bash[17468]: audit 2026-03-09T18:37:09.663199+0000 mgr.y (mgr.24880) 80 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:10.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:10 vm08 bash[17774]: audit 2026-03-09T18:37:09.663199+0000 mgr.y (mgr.24880) 80 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:11.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:11 vm00 bash[22468]: cluster 2026-03-09T18:37:09.864599+0000 mgr.y (mgr.24880) 81 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:11.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:11 vm00 bash[17468]: cluster 2026-03-09T18:37:09.864599+0000 mgr.y (mgr.24880) 81 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:11.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:11 vm08 bash[17774]: cluster 2026-03-09T18:37:09.864599+0000 mgr.y (mgr.24880) 81 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:13.877 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:13 vm00 bash[22468]: cluster 2026-03-09T18:37:11.864888+0000 mgr.y (mgr.24880) 82 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:13.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:13 vm00 bash[17468]: cluster 2026-03-09T18:37:11.864888+0000 mgr.y (mgr.24880) 82 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:13 vm08 bash[17774]: cluster 2026-03-09T18:37:11.864888+0000 mgr.y (mgr.24880) 82 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:15 vm08 bash[17774]: cluster 2026-03-09T18:37:13.865444+0000 mgr.y (mgr.24880) 83 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:16.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:15 vm00 bash[22468]: cluster 2026-03-09T18:37:13.865444+0000 mgr.y (mgr.24880) 83 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:16.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:15 vm00 bash[17468]: cluster 2026-03-09T18:37:13.865444+0000 mgr.y (mgr.24880) 83 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:17.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:17 vm08 bash[17774]: cluster 2026-03-09T18:37:15.865797+0000 mgr.y (mgr.24880) 84 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:17.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:17 vm08 bash[17774]: audit 2026-03-09T18:37:17.185016+0000 mon.a (mon.0) 956 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:18.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:17 vm00 bash[22468]: cluster 2026-03-09T18:37:15.865797+0000 mgr.y (mgr.24880) 84 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:18.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:17 vm00 bash[22468]: audit 2026-03-09T18:37:17.185016+0000 mon.a (mon.0) 956 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:18.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:17 vm00 bash[17468]: cluster 2026-03-09T18:37:15.865797+0000 mgr.y (mgr.24880) 84 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:18.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:17 vm00 bash[17468]: audit 2026-03-09T18:37:17.185016+0000 mon.a (mon.0) 956 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:19.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:19 vm00 bash[22468]: cluster 2026-03-09T18:37:17.866145+0000 mgr.y (mgr.24880) 85 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:37:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:37:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:37:19] "GET /metrics HTTP/1.1" 200 37537 "" "Prometheus/2.51.0" 2026-03-09T18:37:19.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:19 vm00 bash[17468]: cluster 2026-03-09T18:37:17.866145+0000 mgr.y (mgr.24880) 85 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:19 vm08 bash[17774]: cluster 2026-03-09T18:37:17.866145+0000 mgr.y (mgr.24880) 85 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:20 vm08 bash[17774]: audit 2026-03-09T18:37:19.673910+0000 mgr.y (mgr.24880) 86 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:21.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:20 vm00 bash[17468]: audit 2026-03-09T18:37:19.673910+0000 mgr.y (mgr.24880) 86 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:21.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:20 vm00 bash[22468]: audit 2026-03-09T18:37:19.673910+0000 mgr.y (mgr.24880) 86 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:21 vm08 bash[17774]: cluster 2026-03-09T18:37:19.866750+0000 mgr.y (mgr.24880) 87 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:22.127 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:21 vm00 bash[17468]: cluster 2026-03-09T18:37:19.866750+0000 mgr.y (mgr.24880) 87 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:22.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:21 vm00 bash[22468]: cluster 2026-03-09T18:37:19.866750+0000 mgr.y (mgr.24880) 87 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:23 vm08 bash[17774]: cluster 2026-03-09T18:37:21.867106+0000 mgr.y (mgr.24880) 88 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:24.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:23 vm00 bash[17468]: cluster 2026-03-09T18:37:21.867106+0000 mgr.y (mgr.24880) 88 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:24.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:23 vm00 bash[22468]: cluster 2026-03-09T18:37:21.867106+0000 mgr.y (mgr.24880) 88 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:25 vm08 bash[17774]: cluster 2026-03-09T18:37:23.867733+0000 mgr.y (mgr.24880) 89 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:26.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:25 vm00 bash[17468]: cluster 2026-03-09T18:37:23.867733+0000 mgr.y (mgr.24880) 89 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:26.128 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:25 vm00 bash[22468]: cluster 2026-03-09T18:37:23.867733+0000 mgr.y (mgr.24880) 89 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:27 vm08 bash[17774]: cluster 2026-03-09T18:37:25.867997+0000 mgr.y (mgr.24880) 90 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:28.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:27 vm00 bash[17468]: cluster 2026-03-09T18:37:25.867997+0000 mgr.y (mgr.24880) 90 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:28.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:27 vm00 bash[22468]: cluster 2026-03-09T18:37:25.867997+0000 mgr.y (mgr.24880) 90 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:29.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:29 vm00 bash[17468]: cluster 2026-03-09T18:37:27.868317+0000 mgr.y (mgr.24880) 91 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:29 vm00 bash[22468]: cluster 2026-03-09T18:37:27.868317+0000 mgr.y (mgr.24880) 91 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:37:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:37:29] "GET /metrics HTTP/1.1" 200 37537 "" "Prometheus/2.51.0" 2026-03-09T18:37:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:29 vm08 bash[17774]: cluster 
2026-03-09T18:37:27.868317+0000 mgr.y (mgr.24880) 91 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:30 vm08 bash[17774]: audit 2026-03-09T18:37:29.685439+0000 mgr.y (mgr.24880) 92 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:31.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:30 vm00 bash[17468]: audit 2026-03-09T18:37:29.685439+0000 mgr.y (mgr.24880) 92 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:31.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:30 vm00 bash[22468]: audit 2026-03-09T18:37:29.685439+0000 mgr.y (mgr.24880) 92 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:31 vm08 bash[17774]: cluster 2026-03-09T18:37:29.868855+0000 mgr.y (mgr.24880) 93 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:32.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:31 vm00 bash[17468]: cluster 2026-03-09T18:37:29.868855+0000 mgr.y (mgr.24880) 93 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:32.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:31 vm00 bash[22468]: cluster 2026-03-09T18:37:29.868855+0000 mgr.y (mgr.24880) 93 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:32 vm08 
bash[17774]: audit 2026-03-09T18:37:32.185487+0000 mon.a (mon.0) 957 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:32 vm00 bash[17468]: audit 2026-03-09T18:37:32.185487+0000 mon.a (mon.0) 957 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:32 vm00 bash[22468]: audit 2026-03-09T18:37:32.185487+0000 mon.a (mon.0) 957 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:33.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:33 vm08 bash[17774]: cluster 2026-03-09T18:37:31.869187+0000 mgr.y (mgr.24880) 94 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:34.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:33 vm00 bash[17468]: cluster 2026-03-09T18:37:31.869187+0000 mgr.y (mgr.24880) 94 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:34.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:33 vm00 bash[22468]: cluster 2026-03-09T18:37:31.869187+0000 mgr.y (mgr.24880) 94 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:35 vm08 bash[17774]: cluster 2026-03-09T18:37:33.869757+0000 mgr.y (mgr.24880) 95 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:36.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:37:35 vm00 bash[22468]: cluster 2026-03-09T18:37:33.869757+0000 mgr.y (mgr.24880) 95 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:36.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:35 vm00 bash[17468]: cluster 2026-03-09T18:37:33.869757+0000 mgr.y (mgr.24880) 95 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:37 vm08 bash[17774]: cluster 2026-03-09T18:37:35.870140+0000 mgr.y (mgr.24880) 96 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:38.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:37 vm00 bash[22468]: cluster 2026-03-09T18:37:35.870140+0000 mgr.y (mgr.24880) 96 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:38.127 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:37 vm00 bash[17468]: cluster 2026-03-09T18:37:35.870140+0000 mgr.y (mgr.24880) 96 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:39.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:39 vm00 bash[22468]: cluster 2026-03-09T18:37:37.870420+0000 mgr.y (mgr.24880) 97 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:39.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:39 vm00 bash[17468]: cluster 2026-03-09T18:37:37.870420+0000 mgr.y (mgr.24880) 97 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:39.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:37:39 vm00 
bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:37:39] "GET /metrics HTTP/1.1" 200 37534 "" "Prometheus/2.51.0" 2026-03-09T18:37:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:39 vm08 bash[17774]: cluster 2026-03-09T18:37:37.870420+0000 mgr.y (mgr.24880) 97 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:40 vm00 bash[22468]: audit 2026-03-09T18:37:39.695994+0000 mgr.y (mgr.24880) 98 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:41.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:40 vm00 bash[17468]: audit 2026-03-09T18:37:39.695994+0000 mgr.y (mgr.24880) 98 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:40 vm08 bash[17774]: audit 2026-03-09T18:37:39.695994+0000 mgr.y (mgr.24880) 98 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:42.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:41 vm00 bash[22468]: cluster 2026-03-09T18:37:39.870963+0000 mgr.y (mgr.24880) 99 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:42.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:41 vm00 bash[17468]: cluster 2026-03-09T18:37:39.870963+0000 mgr.y (mgr.24880) 99 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:42.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:41 vm08 bash[17774]: cluster 2026-03-09T18:37:39.870963+0000 mgr.y 
(mgr.24880) 99 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:44.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:43 vm00 bash[22468]: cluster 2026-03-09T18:37:41.871242+0000 mgr.y (mgr.24880) 100 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:44.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:43 vm00 bash[17468]: cluster 2026-03-09T18:37:41.871242+0000 mgr.y (mgr.24880) 100 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:44.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:43 vm08 bash[17774]: cluster 2026-03-09T18:37:41.871242+0000 mgr.y (mgr.24880) 100 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:45 vm00 bash[17468]: cluster 2026-03-09T18:37:43.871690+0000 mgr.y (mgr.24880) 101 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:45 vm00 bash[22468]: cluster 2026-03-09T18:37:43.871690+0000 mgr.y (mgr.24880) 101 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:45 vm08 bash[17774]: cluster 2026-03-09T18:37:43.871690+0000 mgr.y (mgr.24880) 101 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:46 vm00 bash[22468]: audit 2026-03-09T18:37:45.879954+0000 mon.a 
(mon.0) 958 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:46 vm00 bash[22468]: audit 2026-03-09T18:37:46.191885+0000 mon.a (mon.0) 959 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:46 vm00 bash[22468]: audit 2026-03-09T18:37:46.192374+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:46 vm00 bash[22468]: audit 2026-03-09T18:37:46.196960+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:46 vm00 bash[17468]: audit 2026-03-09T18:37:45.879954+0000 mon.a (mon.0) 958 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:46 vm00 bash[17468]: audit 2026-03-09T18:37:46.191885+0000 mon.a (mon.0) 959 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:46 vm00 bash[17468]: audit 2026-03-09T18:37:46.192374+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:37:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:46 vm00 bash[17468]: audit 2026-03-09T18:37:46.196960+0000 mon.a (mon.0) 961 : audit [INF] 
from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:37:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:46 vm08 bash[17774]: audit 2026-03-09T18:37:45.879954+0000 mon.a (mon.0) 958 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:37:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:46 vm08 bash[17774]: audit 2026-03-09T18:37:46.191885+0000 mon.a (mon.0) 959 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:37:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:46 vm08 bash[17774]: audit 2026-03-09T18:37:46.192374+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:37:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:46 vm08 bash[17774]: audit 2026-03-09T18:37:46.196960+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' 2026-03-09T18:37:48.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:47 vm00 bash[22468]: cluster 2026-03-09T18:37:45.871987+0000 mgr.y (mgr.24880) 102 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:48.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:47 vm00 bash[22468]: audit 2026-03-09T18:37:47.185286+0000 mon.a (mon.0) 962 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:48.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:47 vm00 bash[17468]: cluster 2026-03-09T18:37:45.871987+0000 mgr.y (mgr.24880) 102 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:37:48.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:47 vm00 bash[17468]: audit 2026-03-09T18:37:47.185286+0000 mon.a (mon.0) 962 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:47 vm08 bash[17774]: cluster 2026-03-09T18:37:45.871987+0000 mgr.y (mgr.24880) 102 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:47 vm08 bash[17774]: audit 2026-03-09T18:37:47.185286+0000 mon.a (mon.0) 962 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:37:49.775 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:37:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:37:49] "GET /metrics HTTP/1.1" 200 37547 "" "Prometheus/2.51.0" 2026-03-09T18:37:50.127 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:49 vm00 bash[22468]: cluster 2026-03-09T18:37:47.872296+0000 mgr.y (mgr.24880) 103 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:49 vm00 bash[17468]: cluster 2026-03-09T18:37:47.872296+0000 mgr.y (mgr.24880) 103 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:49 vm08 bash[17774]: cluster 2026-03-09T18:37:47.872296+0000 mgr.y (mgr.24880) 103 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:51.127 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:50 vm00 bash[22468]: audit 2026-03-09T18:37:49.703423+0000 mgr.y (mgr.24880) 104 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:51.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:50 vm00 bash[17468]: audit 2026-03-09T18:37:49.703423+0000 mgr.y (mgr.24880) 104 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:50 vm08 bash[17774]: audit 2026-03-09T18:37:49.703423+0000 mgr.y (mgr.24880) 104 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:37:52.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:51 vm00 bash[22468]: cluster 2026-03-09T18:37:49.872785+0000 mgr.y (mgr.24880) 105 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:51 vm00 bash[17468]: cluster 2026-03-09T18:37:49.872785+0000 mgr.y (mgr.24880) 105 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:51 vm08 bash[17774]: cluster 2026-03-09T18:37:49.872785+0000 mgr.y (mgr.24880) 105 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:53.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:52 vm00 bash[22468]: cluster 2026-03-09T18:37:51.873058+0000 mgr.y (mgr.24880) 106 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:37:53.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:52 vm00 bash[17468]: cluster 2026-03-09T18:37:51.873058+0000 mgr.y (mgr.24880) 106 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:52 vm08 bash[17774]: cluster 2026-03-09T18:37:51.873058+0000 mgr.y (mgr.24880) 106 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:54 vm08 bash[17774]: cluster 2026-03-09T18:37:53.873549+0000 mgr.y (mgr.24880) 107 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:55.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:54 vm00 bash[22468]: cluster 2026-03-09T18:37:53.873549+0000 mgr.y (mgr.24880) 107 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:55.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:54 vm00 bash[17468]: cluster 2026-03-09T18:37:53.873549+0000 mgr.y (mgr.24880) 107 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:37:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:57 vm00 bash[22468]: cluster 2026-03-09T18:37:55.873857+0000 mgr.y (mgr.24880) 108 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:57.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:57 vm00 bash[17468]: cluster 2026-03-09T18:37:55.873857+0000 mgr.y (mgr.24880) 108 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:37:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:57 vm08 bash[17774]: cluster 2026-03-09T18:37:55.873857+0000 mgr.y (mgr.24880) 108 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:59.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:37:59 vm00 bash[22468]: cluster 2026-03-09T18:37:57.874123+0000 mgr.y (mgr.24880) 109 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:59.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:37:59 vm00 bash[17468]: cluster 2026-03-09T18:37:57.874123+0000 mgr.y (mgr.24880) 109 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:37:59.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:37:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:37:59] "GET /metrics HTTP/1.1" 200 37547 "" "Prometheus/2.51.0" 2026-03-09T18:37:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:37:59 vm08 bash[17774]: cluster 2026-03-09T18:37:57.874123+0000 mgr.y (mgr.24880) 109 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:00.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:00 vm00 bash[22468]: audit 2026-03-09T18:37:59.707887+0000 mgr.y (mgr.24880) 110 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:00.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:00 vm00 bash[17468]: audit 2026-03-09T18:37:59.707887+0000 mgr.y (mgr.24880) 110 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:00.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:38:00 vm08 bash[17774]: audit 2026-03-09T18:37:59.707887+0000 mgr.y (mgr.24880) 110 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:01.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:01 vm00 bash[22468]: cluster 2026-03-09T18:37:59.874551+0000 mgr.y (mgr.24880) 111 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:01.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:01 vm00 bash[17468]: cluster 2026-03-09T18:37:59.874551+0000 mgr.y (mgr.24880) 111 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:01.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:01 vm08 bash[17774]: cluster 2026-03-09T18:37:59.874551+0000 mgr.y (mgr.24880) 111 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:02 vm00 bash[22468]: audit 2026-03-09T18:38:02.185583+0000 mon.a (mon.0) 963 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:02 vm00 bash[17468]: audit 2026-03-09T18:38:02.185583+0000 mon.a (mon.0) 963 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:02 vm08 bash[17774]: audit 2026-03-09T18:38:02.185583+0000 mon.a (mon.0) 963 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:03.878 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:03 vm00 bash[22468]: cluster 2026-03-09T18:38:01.874853+0000 mgr.y (mgr.24880) 112 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:03.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:03 vm00 bash[17468]: cluster 2026-03-09T18:38:01.874853+0000 mgr.y (mgr.24880) 112 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:03.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:03 vm08 bash[17774]: cluster 2026-03-09T18:38:01.874853+0000 mgr.y (mgr.24880) 112 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:05.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:05 vm00 bash[22468]: cluster 2026-03-09T18:38:03.875441+0000 mgr.y (mgr.24880) 113 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:05.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:05 vm00 bash[17468]: cluster 2026-03-09T18:38:03.875441+0000 mgr.y (mgr.24880) 113 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:05.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:05 vm08 bash[17774]: cluster 2026-03-09T18:38:03.875441+0000 mgr.y (mgr.24880) 113 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:07.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:07 vm00 bash[22468]: cluster 2026-03-09T18:38:05.875733+0000 mgr.y (mgr.24880) 114 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:07.878 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:07 vm00 bash[17468]: cluster 2026-03-09T18:38:05.875733+0000 mgr.y (mgr.24880) 114 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:07 vm08 bash[17774]: cluster 2026-03-09T18:38:05.875733+0000 mgr.y (mgr.24880) 114 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:09.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:38:09] "GET /metrics HTTP/1.1" 200 37551 "" "Prometheus/2.51.0" 2026-03-09T18:38:09.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:09 vm00 bash[17468]: cluster 2026-03-09T18:38:07.876116+0000 mgr.y (mgr.24880) 115 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:09.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:09 vm00 bash[22468]: cluster 2026-03-09T18:38:07.876116+0000 mgr.y (mgr.24880) 115 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:09.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:09 vm08 bash[17774]: cluster 2026-03-09T18:38:07.876116+0000 mgr.y (mgr.24880) 115 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:10.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:10 vm00 bash[22468]: audit 2026-03-09T18:38:09.709377+0000 mgr.y (mgr.24880) 116 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:10.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:10 vm00 bash[17468]: audit 
2026-03-09T18:38:09.709377+0000 mgr.y (mgr.24880) 116 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:10.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:10 vm08 bash[17774]: audit 2026-03-09T18:38:09.709377+0000 mgr.y (mgr.24880) 116 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:11.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:11 vm00 bash[22468]: cluster 2026-03-09T18:38:09.876654+0000 mgr.y (mgr.24880) 117 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:11.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:11 vm00 bash[17468]: cluster 2026-03-09T18:38:09.876654+0000 mgr.y (mgr.24880) 117 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:11.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:11 vm08 bash[17774]: cluster 2026-03-09T18:38:09.876654+0000 mgr.y (mgr.24880) 117 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:13.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:13 vm08 bash[17774]: cluster 2026-03-09T18:38:11.877085+0000 mgr.y (mgr.24880) 118 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:14.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:13 vm00 bash[22468]: cluster 2026-03-09T18:38:11.877085+0000 mgr.y (mgr.24880) 118 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:14.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:13 vm00 
bash[17468]: cluster 2026-03-09T18:38:11.877085+0000 mgr.y (mgr.24880) 118 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:15.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:15 vm08 bash[17774]: cluster 2026-03-09T18:38:13.877610+0000 mgr.y (mgr.24880) 119 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:16.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:15 vm00 bash[22468]: cluster 2026-03-09T18:38:13.877610+0000 mgr.y (mgr.24880) 119 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:16.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:15 vm00 bash[17468]: cluster 2026-03-09T18:38:13.877610+0000 mgr.y (mgr.24880) 119 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:17.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:17 vm08 bash[17774]: cluster 2026-03-09T18:38:15.877934+0000 mgr.y (mgr.24880) 120 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:17.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:17 vm08 bash[17774]: audit 2026-03-09T18:38:17.185848+0000 mon.a (mon.0) 964 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:18.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:17 vm00 bash[22468]: cluster 2026-03-09T18:38:15.877934+0000 mgr.y (mgr.24880) 120 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:18.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:17 
vm00 bash[22468]: audit 2026-03-09T18:38:17.185848+0000 mon.a (mon.0) 964 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:18.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:17 vm00 bash[17468]: cluster 2026-03-09T18:38:15.877934+0000 mgr.y (mgr.24880) 120 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:18.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:17 vm00 bash[17468]: audit 2026-03-09T18:38:17.185848+0000 mon.a (mon.0) 964 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:19.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:19 vm00 bash[22468]: cluster 2026-03-09T18:38:17.878206+0000 mgr.y (mgr.24880) 121 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:19.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:19 vm00 bash[17468]: cluster 2026-03-09T18:38:17.878206+0000 mgr.y (mgr.24880) 121 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:38:19] "GET /metrics HTTP/1.1" 200 37551 "" "Prometheus/2.51.0" 2026-03-09T18:38:19.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:19 vm08 bash[17774]: cluster 2026-03-09T18:38:17.878206+0000 mgr.y (mgr.24880) 121 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:20 vm08 bash[17774]: audit 2026-03-09T18:38:19.717566+0000 mgr.y (mgr.24880) 
122 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:21.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:20 vm00 bash[22468]: audit 2026-03-09T18:38:19.717566+0000 mgr.y (mgr.24880) 122 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:21.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:20 vm00 bash[17468]: audit 2026-03-09T18:38:19.717566+0000 mgr.y (mgr.24880) 122 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:21.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:21 vm08 bash[17774]: cluster 2026-03-09T18:38:19.878761+0000 mgr.y (mgr.24880) 123 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:22.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:21 vm00 bash[22468]: cluster 2026-03-09T18:38:19.878761+0000 mgr.y (mgr.24880) 123 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:22.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:21 vm00 bash[17468]: cluster 2026-03-09T18:38:19.878761+0000 mgr.y (mgr.24880) 123 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:23.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:23 vm08 bash[17774]: cluster 2026-03-09T18:38:21.879121+0000 mgr.y (mgr.24880) 124 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:24.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:23 vm00 bash[22468]: cluster 
2026-03-09T18:38:21.879121+0000 mgr.y (mgr.24880) 124 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:24.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:23 vm00 bash[17468]: cluster 2026-03-09T18:38:21.879121+0000 mgr.y (mgr.24880) 124 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:25.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:25 vm08 bash[17774]: cluster 2026-03-09T18:38:23.879611+0000 mgr.y (mgr.24880) 125 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:26.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:25 vm00 bash[22468]: cluster 2026-03-09T18:38:23.879611+0000 mgr.y (mgr.24880) 125 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:26.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:25 vm00 bash[17468]: cluster 2026-03-09T18:38:23.879611+0000 mgr.y (mgr.24880) 125 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:27 vm08 bash[17774]: cluster 2026-03-09T18:38:25.879877+0000 mgr.y (mgr.24880) 126 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:28.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:27 vm00 bash[22468]: cluster 2026-03-09T18:38:25.879877+0000 mgr.y (mgr.24880) 126 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:28.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:27 vm00 bash[17468]: cluster 
2026-03-09T18:38:25.879877+0000 mgr.y (mgr.24880) 126 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:29 vm00 bash[22468]: cluster 2026-03-09T18:38:27.880175+0000 mgr.y (mgr.24880) 127 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:29.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:29 vm00 bash[17468]: cluster 2026-03-09T18:38:27.880175+0000 mgr.y (mgr.24880) 127 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:38:29] "GET /metrics HTTP/1.1" 200 37551 "" "Prometheus/2.51.0" 2026-03-09T18:38:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:29 vm08 bash[17774]: cluster 2026-03-09T18:38:27.880175+0000 mgr.y (mgr.24880) 127 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:31.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:30 vm00 bash[22468]: audit 2026-03-09T18:38:29.720924+0000 mgr.y (mgr.24880) 128 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:31.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:30 vm00 bash[17468]: audit 2026-03-09T18:38:29.720924+0000 mgr.y (mgr.24880) 128 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:31.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:30 vm08 bash[17774]: audit 2026-03-09T18:38:29.720924+0000 mgr.y (mgr.24880) 128 : audit [DBG] 
from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:32.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:31 vm00 bash[22468]: cluster 2026-03-09T18:38:29.880768+0000 mgr.y (mgr.24880) 129 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:32.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:31 vm00 bash[17468]: cluster 2026-03-09T18:38:29.880768+0000 mgr.y (mgr.24880) 129 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:32.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:31 vm08 bash[17774]: cluster 2026-03-09T18:38:29.880768+0000 mgr.y (mgr.24880) 129 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:32.231 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (8m) 2m ago 15m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (8m) 2m ago 15m 39.6M - dad864ee21e9 b6a0baf6efb9 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 2m ago 15m 41.2M - 3.5 e1d6a67b021e c19e19fc9de1 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 
*:8443,9283 running (10m) 2m ago 18m 463M - 19.2.3-678-ge911bdeb 654f31e6858e c24396cb6839 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (5m) 2m ago 19m 516M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (19m) 2m ago 19m 60.8M 2048M 17.2.0 e1d6a67b021e 819e8890799a 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (18m) 2m ago 18m 49.5M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (18m) 2m ago 18m 49.9M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (8m) 2m ago 15m 7648k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (8m) 2m ago 15m 7659k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (18m) 2m ago 18m 50.9M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (17m) 2m ago 17m 53.3M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (17m) 2m ago 17m 47.9M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (17m) 2m ago 17m 53.1M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (17m) 2m ago 17m 52.0M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (16m) 2m ago 16m 51.5M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (16m) 2m ago 
16m 49.8M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (16m) 2m ago 16m 50.7M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 2m ago 15m 37.7M - 2.51.0 1d3b7f56885b 63f401925c36 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (15m) 2m ago 15m 86.5M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:38:32.675 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (15m) 2m ago 15m 87.0M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:38:32.722 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-09T18:38:32.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:32 vm00 bash[22468]: audit 2026-03-09T18:38:32.186323+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:32.974 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:32 vm00 bash[17468]: audit 2026-03-09T18:38:32.186323+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:33.207 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:38:33.208 
INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {}, 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13, 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:38:33.208 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:38:33.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:32 vm08 bash[17774]: audit 2026-03-09T18:38:32.186323+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24880 192.168.123.100:0/1809614007' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:33.259 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s' 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: cluster: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: id: 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: health: HEALTH_OK 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: services: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: mon: 3 daemons, quorum a,c,b (age 18m) 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: mgr: y(active, since 3m), standbys: x 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: osd: 8 osds: 8 up (since 16m), 8 in (since 16m) 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: rgw: 2 daemons active (2 hosts, 1 zones) 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: data: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: pools: 6 pools, 161 pgs 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: objects: 209 objects, 457 KiB 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: usage: 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: pgs: 161 active+clean 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: io: 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: client: 853 B/s rd, 0 op/s rd, 0 op/s wr 2026-03-09T18:38:33.726 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:33 vm00 bash[22468]: cluster 2026-03-09T18:38:31.881106+0000 mgr.y (mgr.24880) 130 : cluster [DBG] pgmap v93: 161 pgs: 161 
active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:33 vm00 bash[22468]: audit 2026-03-09T18:38:32.675513+0000 mgr.y (mgr.24880) 131 : audit [DBG] from='client.24958 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:33 vm00 bash[22468]: audit 2026-03-09T18:38:33.211709+0000 mon.a (mon.0) 966 : audit [DBG] from='client.? 192.168.123.100:0/3651884731' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:33 vm00 bash[22468]: audit 2026-03-09T18:38:33.726487+0000 mon.b (mon.2) 134 : audit [DBG] from='client.? 192.168.123.100:0/2440526523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:33 vm00 bash[17468]: cluster 2026-03-09T18:38:31.881106+0000 mgr.y (mgr.24880) 130 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:33 vm00 bash[17468]: audit 2026-03-09T18:38:32.675513+0000 mgr.y (mgr.24880) 131 : audit [DBG] from='client.24958 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:33 vm00 bash[17468]: audit 2026-03-09T18:38:33.211709+0000 mon.a (mon.0) 966 : audit [DBG] from='client.? 192.168.123.100:0/3651884731' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:38:33.750 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:33 vm00 bash[17468]: audit 2026-03-09T18:38:33.726487+0000 mon.b (mon.2) 134 : audit [DBG] from='client.? 
192.168.123.100:0/2440526523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T18:38:33.781 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:38:34.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:33 vm08 bash[17774]: cluster 2026-03-09T18:38:31.881106+0000 mgr.y (mgr.24880) 130 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:38:34.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:33 vm08 bash[17774]: audit 2026-03-09T18:38:32.675513+0000 mgr.y (mgr.24880) 131 : audit [DBG] from='client.24958 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:38:34.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:33 vm08 bash[17774]: audit 2026-03-09T18:38:33.211709+0000 mon.a (mon.0) 966 : audit [DBG] from='client.? 192.168.123.100:0/3651884731' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:38:34.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:33 vm08 bash[17774]: audit 2026-03-09T18:38:33.726487+0000 mon.b (mon.2) 134 : audit [DBG] from='client.? 
192.168.123.100:0/2440526523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T18:38:34.256 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:38:34.306 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 1'"'"'' 2026-03-09T18:38:34.805 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:38:34.806 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:34 vm00 bash[22468]: audit 2026-03-09T18:38:34.260225+0000 mon.c (mon.1) 143 : audit [DBG] from='client.? 192.168.123.100:0/2877917929' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:38:34.807 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:34 vm00 bash[17468]: audit 2026-03-09T18:38:34.260225+0000 mon.c (mon.1) 143 : audit [DBG] from='client.? 192.168.123.100:0/2877917929' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:38:34.869 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph mgr fail' 2026-03-09T18:38:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:34 vm08 bash[17774]: audit 2026-03-09T18:38:34.260225+0000 mon.c (mon.1) 143 : audit [DBG] from='client.? 
192.168.123.100:0/2877917929' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:38:35.869 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180' 2026-03-09T18:38:36.035 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:35 vm00 bash[22468]: cluster 2026-03-09T18:38:33.881646+0000 mgr.y (mgr.24880) 132 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:36.035 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:35 vm00 bash[22468]: audit 2026-03-09T18:38:34.798472+0000 mon.c (mon.1) 144 : audit [DBG] from='client.? 192.168.123.100:0/4830588' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:38:36.035 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:35 vm00 bash[22468]: audit 2026-03-09T18:38:35.350880+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.100:0/2127510344' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch 2026-03-09T18:38:36.035 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:35 vm00 bash[22468]: cluster 2026-03-09T18:38:35.356759+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:35 vm00 bash[17468]: cluster 2026-03-09T18:38:33.881646+0000 mgr.y (mgr.24880) 132 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:35 vm00 bash[17468]: audit 2026-03-09T18:38:34.798472+0000 mon.c (mon.1) 144 : audit [DBG] from='client.? 
192.168.123.100:0/4830588' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:35 vm00 bash[17468]: audit 2026-03-09T18:38:35.350880+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.100:0/2127510344' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:35 vm00 bash[17468]: cluster 2026-03-09T18:38:35.356759+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:35 vm00 bash[53976]: ignoring --setuser ceph since I am not root 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:35 vm00 bash[53976]: ignoring --setgroup ceph since I am not root 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:35 vm00 bash[53976]: debug 2026-03-09T18:38:35.840+0000 7f7a7d566640 1 -- 192.168.123.100:0/2774178413 <== mon.2 v2:192.168.123.108:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x5615587d34a0 con 0x5615587d5800 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:35 vm00 bash[53976]: debug 2026-03-09T18:38:35.912+0000 7f7a7fdc3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:38:36.036 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:35 vm00 bash[53976]: debug 2026-03-09T18:38:35.948+0000 7f7a7fdc3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:38:36.128 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:36 vm00 bash[53976]: debug 2026-03-09T18:38:36.100+0000 7f7a7fdc3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:38:36.142 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:35 vm08 bash[17774]: cluster 2026-03-09T18:38:33.881646+0000 mgr.y (mgr.24880) 132 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 
96 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:36.142 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:35 vm08 bash[17774]: audit 2026-03-09T18:38:34.798472+0000 mon.c (mon.1) 144 : audit [DBG] from='client.? 192.168.123.100:0/4830588' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:38:36.142 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:35 vm08 bash[17774]: audit 2026-03-09T18:38:35.350880+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.100:0/2127510344' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch 2026-03-09T18:38:36.142 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:35 vm08 bash[17774]: cluster 2026-03-09T18:38:35.356759+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T18:38:36.142 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:35 vm08 bash[36576]: [09/Mar/2026:18:38:35] ENGINE Bus STOPPING 2026-03-09T18:38:36.407 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:36 vm08 bash[36576]: [09/Mar/2026:18:38:36] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:38:36.407 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:36 vm08 bash[36576]: [09/Mar/2026:18:38:36] ENGINE Bus STOPPED 2026-03-09T18:38:36.407 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:36 vm08 bash[36576]: [09/Mar/2026:18:38:36] ENGINE Bus STARTING 2026-03-09T18:38:36.407 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:36 vm08 bash[36576]: [09/Mar/2026:18:38:36] ENGINE Serving on http://:::9283 2026-03-09T18:38:36.407 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:36 vm08 bash[36576]: [09/Mar/2026:18:38:36] ENGINE Bus STARTED 2026-03-09T18:38:36.628 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:36 vm00 bash[53976]: debug 2026-03-09T18:38:36.424+0000 7f7a7fdc3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 
bash[22468]: audit 2026-03-09T18:38:35.775669+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2127510344' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: cluster 2026-03-09T18:38:35.775720+0000 mon.a (mon.0) 970 : cluster [DBG] mgrmap e33: x(active, starting, since 0.423264s) 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.800768+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.800876+0000 mon.a (mon.0) 972 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.800932+0000 mon.a (mon.0) 973 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.800994+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801091+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801175+0000 mon.a (mon.0) 976 : audit [DBG] 
from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801242+0000 mon.a (mon.0) 977 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801443+0000 mon.a (mon.0) 978 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801518+0000 mon.a (mon.0) 979 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801579+0000 mon.a (mon.0) 980 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801636+0000 mon.a (mon.0) 981 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:38:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801693+0000 mon.a (mon.0) 982 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801753+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' 
entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801790+0000 mon.a (mon.0) 984 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:35.801958+0000 mon.a (mon.0) 985 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: cluster 2026-03-09T18:38:36.185527+0000 mon.a (mon.0) 986 : cluster [INF] Manager daemon x is now available 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:36.210290+0000 mon.a (mon.0) 987 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:36.229009+0000 mon.a (mon.0) 988 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:36.232785+0000 mon.a (mon.0) 989 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:36 vm00 bash[22468]: audit 2026-03-09T18:38:36.285453+0000 mon.a (mon.0) 990 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.775669+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2127510344' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: cluster 2026-03-09T18:38:35.775720+0000 mon.a (mon.0) 970 : cluster [DBG] mgrmap e33: x(active, starting, since 0.423264s) 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.800768+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.800876+0000 mon.a (mon.0) 972 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.800932+0000 mon.a (mon.0) 973 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.800994+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801091+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 
2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801175+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801242+0000 mon.a (mon.0) 977 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801443+0000 mon.a (mon.0) 978 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801518+0000 mon.a (mon.0) 979 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801579+0000 mon.a (mon.0) 980 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801636+0000 mon.a (mon.0) 981 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801693+0000 mon.a (mon.0) 982 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:38:37.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801753+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801790+0000 mon.a (mon.0) 984 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:35.801958+0000 mon.a (mon.0) 985 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: cluster 2026-03-09T18:38:36.185527+0000 mon.a (mon.0) 986 : cluster [INF] Manager daemon x is now available 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:36.210290+0000 mon.a (mon.0) 987 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:36.229009+0000 mon.a (mon.0) 988 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: audit 2026-03-09T18:38:36.232785+0000 mon.a (mon.0) 989 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:36 vm00 bash[17468]: 
audit 2026-03-09T18:38:36.285453+0000 mon.a (mon.0) 990 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:36 vm00 bash[53976]: debug 2026-03-09T18:38:36.908+0000 7f7a7fdc3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:38:37.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.004+0000 7f7a7fdc3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.775669+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.100:0/2127510344' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: cluster 2026-03-09T18:38:35.775720+0000 mon.a (mon.0) 970 : cluster [DBG] mgrmap e33: x(active, starting, since 0.423264s) 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.800768+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.800876+0000 mon.a (mon.0) 972 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.800932+0000 mon.a (mon.0) 973 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.800994+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801091+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801175+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801242+0000 mon.a (mon.0) 977 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:38:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801443+0000 mon.a (mon.0) 978 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801518+0000 mon.a (mon.0) 979 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801579+0000 mon.a (mon.0) 980 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:38:37.225 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801636+0000 mon.a (mon.0) 981 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801693+0000 mon.a (mon.0) 982 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801753+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801790+0000 mon.a (mon.0) 984 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:35.801958+0000 mon.a (mon.0) 985 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: cluster 2026-03-09T18:38:36.185527+0000 mon.a (mon.0) 986 : cluster [INF] Manager daemon x is now available 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:36.210290+0000 mon.a (mon.0) 987 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:36.229009+0000 mon.a (mon.0) 988 : 
audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:36.232785+0000 mon.a (mon.0) 989 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:38:37.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:36 vm08 bash[17774]: audit 2026-03-09T18:38:36.285453+0000 mon.a (mon.0) 990 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: from numpy import show_config as show_numpy_config 2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.140+0000 7f7a7fdc3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.284+0000 7f7a7fdc3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.328+0000 7f7a7fdc3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:38:37.401 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.364+0000 7f7a7fdc3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:38:37.820 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.404+0000 7f7a7fdc3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:38:37.820 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.452+0000 7f7a7fdc3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:37 vm00 bash[22468]: cluster 2026-03-09T18:38:36.816146+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e34: x(active, since 1.46368s) 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:37 vm00 bash[22468]: cluster 2026-03-09T18:38:36.846408+0000 mgr.x (mgr.24833) 1 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:37 vm00 bash[17468]: cluster 2026-03-09T18:38:36.816146+0000 mon.a 
(mon.0) 991 : cluster [DBG] mgrmap e34: x(active, since 1.46368s) 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:37 vm00 bash[17468]: cluster 2026-03-09T18:38:36.846408+0000 mgr.x (mgr.24833) 1 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.876+0000 7f7a7fdc3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.912+0000 7f7a7fdc3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:38:38.096 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:37 vm00 bash[53976]: debug 2026-03-09T18:38:37.952+0000 7f7a7fdc3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:38:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:37 vm08 bash[17774]: cluster 2026-03-09T18:38:36.816146+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e34: x(active, since 1.46368s) 2026-03-09T18:38:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:37 vm08 bash[17774]: cluster 2026-03-09T18:38:36.846408+0000 mgr.x (mgr.24833) 1 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:38.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.096+0000 7f7a7fdc3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:38:38.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.140+0000 7f7a7fdc3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:38:38.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.176+0000 7f7a7fdc3140 -1 mgr[py] Module osd_perf_query has 
missing NOTIFY_TYPES member 2026-03-09T18:38:38.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.292+0000 7f7a7fdc3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:38:38.715 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.452+0000 7f7a7fdc3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:38:38.715 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.628+0000 7f7a7fdc3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:38:38.715 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.672+0000 7f7a7fdc3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:38:39.116 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:38 vm00 bash[22468]: cluster 2026-03-09T18:38:37.839704+0000 mon.a (mon.0) 992 : cluster [DBG] mgrmap e35: x(active, since 2s) 2026-03-09T18:38:39.116 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:38 vm00 bash[22468]: cephadm 2026-03-09T18:38:37.839857+0000 mgr.x (mgr.24833) 3 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Bus STARTING 2026-03-09T18:38:39.116 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:38 vm00 bash[17468]: cluster 2026-03-09T18:38:37.839704+0000 mon.a (mon.0) 992 : cluster [DBG] mgrmap e35: x(active, since 2s) 2026-03-09T18:38:39.116 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:38 vm00 bash[17468]: cephadm 2026-03-09T18:38:37.839857+0000 mgr.x (mgr.24833) 3 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Bus STARTING 2026-03-09T18:38:39.116 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 2026-03-09T18:38:38.716+0000 7f7a7fdc3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:38:39.116 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:38 vm00 bash[53976]: debug 
2026-03-09T18:38:38.876+0000 7f7a7fdc3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:38:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:38 vm08 bash[17774]: cluster 2026-03-09T18:38:37.839704+0000 mon.a (mon.0) 992 : cluster [DBG] mgrmap e35: x(active, since 2s) 2026-03-09T18:38:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:38 vm08 bash[17774]: cephadm 2026-03-09T18:38:37.839857+0000 mgr.x (mgr.24833) 3 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Bus STARTING 2026-03-09T18:38:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: debug 2026-03-09T18:38:39.115+0000 7f7a7fdc3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:38:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: [09/Mar/2026:18:38:39] ENGINE Bus STARTING 2026-03-09T18:38:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: CherryPy Checker: 2026-03-09T18:38:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: The Application mounted at '' has an empty config. 
2026-03-09T18:38:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: [09/Mar/2026:18:38:39] ENGINE Serving on http://:::9283 2026-03-09T18:38:39.378 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: [09/Mar/2026:18:38:39] ENGINE Bus STARTED 2026-03-09T18:38:39.832 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:38:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:38:39] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0" 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: cephadm 2026-03-09T18:38:37.947399+0000 mgr.x (mgr.24833) 4 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Serving on https://192.168.123.108:7150 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: cephadm 2026-03-09T18:38:37.947839+0000 mgr.x (mgr.24833) 5 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Client ('192.168.123.108', 33478) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: cephadm 2026-03-09T18:38:38.048701+0000 mgr.x (mgr.24833) 6 : cephadm [INF] [09/Mar/2026:18:38:38] ENGINE Serving on http://192.168.123.108:8765 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: cephadm 2026-03-09T18:38:38.048850+0000 mgr.x (mgr.24833) 7 : cephadm [INF] [09/Mar/2026:18:38:38] ENGINE Bus STARTED 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: audit 2026-03-09T18:38:39.124418+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.? 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: cluster 2026-03-09T18:38:39.125156+0000 mon.a (mon.0) 993 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: audit 2026-03-09T18:38:39.125975+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: audit 2026-03-09T18:38:39.126785+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:39 vm00 bash[22468]: audit 2026-03-09T18:38:39.127199+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.? 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: cephadm 2026-03-09T18:38:37.947399+0000 mgr.x (mgr.24833) 4 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Serving on https://192.168.123.108:7150 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: cephadm 2026-03-09T18:38:37.947839+0000 mgr.x (mgr.24833) 5 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Client ('192.168.123.108', 33478) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: cephadm 2026-03-09T18:38:38.048701+0000 mgr.x (mgr.24833) 6 : cephadm [INF] [09/Mar/2026:18:38:38] ENGINE Serving on http://192.168.123.108:8765 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: cephadm 2026-03-09T18:38:38.048850+0000 mgr.x (mgr.24833) 7 : cephadm [INF] [09/Mar/2026:18:38:38] ENGINE Bus STARTED 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: audit 2026-03-09T18:38:39.124418+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: cluster 2026-03-09T18:38:39.125156+0000 mon.a (mon.0) 993 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: audit 2026-03-09T18:38:39.125975+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.? 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: audit 2026-03-09T18:38:39.126785+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:38:40.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:39 vm00 bash[17468]: audit 2026-03-09T18:38:39.127199+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: cephadm 2026-03-09T18:38:37.947399+0000 mgr.x (mgr.24833) 4 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Serving on https://192.168.123.108:7150 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: cephadm 2026-03-09T18:38:37.947839+0000 mgr.x (mgr.24833) 5 : cephadm [INF] [09/Mar/2026:18:38:37] ENGINE Client ('192.168.123.108', 33478) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: cephadm 2026-03-09T18:38:38.048701+0000 mgr.x (mgr.24833) 6 : cephadm [INF] [09/Mar/2026:18:38:38] ENGINE Serving on http://192.168.123.108:8765 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: cephadm 2026-03-09T18:38:38.048850+0000 mgr.x (mgr.24833) 7 : cephadm [INF] [09/Mar/2026:18:38:38] ENGINE Bus STARTED 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: audit 2026-03-09T18:38:39.124418+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.? 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: cluster 2026-03-09T18:38:39.125156+0000 mon.a (mon.0) 993 : cluster [DBG] Standby manager daemon y started 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: audit 2026-03-09T18:38:39.125975+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: audit 2026-03-09T18:38:39.126785+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.? 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T18:38:40.206 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:39 vm08 bash[17774]: audit 2026-03-09T18:38:39.127199+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.? 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:38:40.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:40 vm08 bash[40744]: ts=2026-03-09T18:38:40.206Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:38:40.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:40 vm08 bash[40744]: ts=2026-03-09T18:38:40.206Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:38:40.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:40 vm08 bash[40744]: ts=2026-03-09T18:38:40.206Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:38:40.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:40 vm08 bash[40744]: ts=2026-03-09T18:38:40.208Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:38:40.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:40 vm08 bash[40744]: ts=2026-03-09T18:38:40.208Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http 
config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:38:40.474 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:40 vm08 bash[40744]: ts=2026-03-09T18:38:40.208Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.100:8765: connect: connection refused" 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:40 vm00 bash[22468]: audit 2026-03-09T18:38:39.726263+0000 mgr.x (mgr.24833) 8 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:40 vm00 bash[22468]: cluster 2026-03-09T18:38:39.787256+0000 mgr.x (mgr.24833) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:40 vm00 bash[22468]: cluster 2026-03-09T18:38:39.845191+0000 mon.a (mon.0) 994 : cluster [DBG] mgrmap e36: x(active, since 4s), standbys: y 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:40 vm00 bash[22468]: audit 2026-03-09T18:38:39.848982+0000 mon.a (mon.0) 995 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:40 vm00 bash[17468]: audit 2026-03-09T18:38:39.726263+0000 mgr.x (mgr.24833) 8 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:41.128 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:40 vm00 bash[17468]: cluster 2026-03-09T18:38:39.787256+0000 mgr.x (mgr.24833) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:40 vm00 bash[17468]: cluster 2026-03-09T18:38:39.845191+0000 mon.a (mon.0) 994 : cluster [DBG] mgrmap e36: x(active, since 4s), standbys: y 2026-03-09T18:38:41.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:40 vm00 bash[17468]: audit 2026-03-09T18:38:39.848982+0000 mon.a (mon.0) 995 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:38:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:40 vm08 bash[17774]: audit 2026-03-09T18:38:39.726263+0000 mgr.x (mgr.24833) 8 : audit [DBG] from='client.15072 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:40 vm08 bash[17774]: cluster 2026-03-09T18:38:39.787256+0000 mgr.x (mgr.24833) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:40 vm08 bash[17774]: cluster 2026-03-09T18:38:39.845191+0000 mon.a (mon.0) 994 : cluster [DBG] mgrmap e36: x(active, since 4s), standbys: y 2026-03-09T18:38:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:40 vm08 bash[17774]: audit 2026-03-09T18:38:39.848982+0000 mon.a (mon.0) 995 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:42 vm00 bash[17468]: cluster 2026-03-09T18:38:41.787599+0000 mgr.x (mgr.24833) 10 : cluster [DBG] pgmap v6: 161 
pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:42 vm00 bash[17468]: audit 2026-03-09T18:38:42.101754+0000 mon.a (mon.0) 996 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:42 vm00 bash[17468]: audit 2026-03-09T18:38:42.110686+0000 mon.a (mon.0) 997 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:42 vm00 bash[17468]: audit 2026-03-09T18:38:42.138373+0000 mon.a (mon.0) 998 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:42 vm00 bash[17468]: audit 2026-03-09T18:38:42.147025+0000 mon.a (mon.0) 999 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:42 vm00 bash[22468]: cluster 2026-03-09T18:38:41.787599+0000 mgr.x (mgr.24833) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:42 vm00 bash[22468]: audit 2026-03-09T18:38:42.101754+0000 mon.a (mon.0) 996 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:42 vm00 bash[22468]: audit 2026-03-09T18:38:42.110686+0000 mon.a (mon.0) 997 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:42 vm00 bash[22468]: audit 2026-03-09T18:38:42.138373+0000 mon.a (mon.0) 998 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:38:42 vm00 bash[22468]: audit 2026-03-09T18:38:42.147025+0000 mon.a (mon.0) 999 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:42 vm08 bash[17774]: cluster 2026-03-09T18:38:41.787599+0000 mgr.x (mgr.24833) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:38:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:42 vm08 bash[17774]: audit 2026-03-09T18:38:42.101754+0000 mon.a (mon.0) 996 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:42 vm08 bash[17774]: audit 2026-03-09T18:38:42.110686+0000 mon.a (mon.0) 997 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:42 vm08 bash[17774]: audit 2026-03-09T18:38:42.138373+0000 mon.a (mon.0) 998 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:42 vm08 bash[17774]: audit 2026-03-09T18:38:42.147025+0000 mon.a (mon.0) 999 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.713520+0000 mon.a (mon.0) 1000 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.719764+0000 mon.a (mon.0) 1001 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.721192+0000 mon.a (mon.0) 1002 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' 
entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.741927+0000 mon.a (mon.0) 1003 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.747915+0000 mgr.x (mgr.24833) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.748008+0000 mon.a (mon.0) 1004 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.748025+0000 mgr.x (mgr.24833) 12 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.748997+0000 mon.a (mon.0) 1005 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.749970+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.750520+0000 mon.a (mon.0) 1007 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:38:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 
2026-03-09T18:38:42.788690+0000 mgr.x (mgr.24833) 13 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.791068+0000 mgr.x (mgr.24833) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.819279+0000 mgr.x (mgr.24833) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.825669+0000 mgr.x (mgr.24833) 16 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.851512+0000 mgr.x (mgr.24833) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.857870+0000 mgr.x (mgr.24833) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.895890+0000 mon.a (mon.0) 1008 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.906083+0000 mon.a (mon.0) 1009 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.913056+0000 mon.a 
(mon.0) 1010 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.921369+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.928786+0000 mon.a (mon.0) 1012 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.941350+0000 mon.a (mon.0) 1013 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:42.944594+0000 mon.a (mon.0) 1014 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:43.444559+0000 mon.a (mon.0) 1015 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:43 vm00 bash[22468]: audit 2026-03-09T18:38:43.451252+0000 mon.a (mon.0) 1016 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.713520+0000 mon.a (mon.0) 1000 : audit [INF] from='mgr.24833 
192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.719764+0000 mon.a (mon.0) 1001 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.721192+0000 mon.a (mon.0) 1002 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.741927+0000 mon.a (mon.0) 1003 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.747915+0000 mgr.x (mgr.24833) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.748008+0000 mon.a (mon.0) 1004 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.748025+0000 mgr.x (mgr.24833) 12 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.748997+0000 mon.a (mon.0) 1005 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.749970+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' 
entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.750520+0000 mon.a (mon.0) 1007 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.788690+0000 mgr.x (mgr.24833) 13 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.791068+0000 mgr.x (mgr.24833) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.819279+0000 mgr.x (mgr.24833) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.825669+0000 mgr.x (mgr.24833) 16 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.851512+0000 mgr.x (mgr.24833) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.857870+0000 mgr.x (mgr.24833) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 
2026-03-09T18:38:42.895890+0000 mon.a (mon.0) 1008 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.906083+0000 mon.a (mon.0) 1009 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.913056+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.921369+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.928786+0000 mon.a (mon.0) 1012 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.941350+0000 mon.a (mon.0) 1013 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:42.944594+0000 mon.a (mon.0) 1014 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:43.444559+0000 mon.a (mon.0) 1015 : audit 
[INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:43 vm00 bash[17468]: audit 2026-03-09T18:38:43.451252+0000 mon.a (mon.0) 1016 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.713520+0000 mon.a (mon.0) 1000 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.719764+0000 mon.a (mon.0) 1001 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.721192+0000 mon.a (mon.0) 1002 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.741927+0000 mon.a (mon.0) 1003 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.747915+0000 mgr.x (mgr.24833) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.748008+0000 mon.a (mon.0) 1004 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.748025+0000 mgr.x (mgr.24833) 12 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:38:43.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.748997+0000 mon.a (mon.0) 1005 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.749970+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.750520+0000 mon.a (mon.0) 1007 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.788690+0000 mgr.x (mgr.24833) 13 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.791068+0000 mgr.x (mgr.24833) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.819279+0000 mgr.x (mgr.24833) 15 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.825669+0000 mgr.x (mgr.24833) 16 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:38:43.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.851512+0000 mgr.x (mgr.24833) 17 : 
cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.857870+0000 mgr.x (mgr.24833) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.895890+0000 mon.a (mon.0) 1008 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.906083+0000 mon.a (mon.0) 1009 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.913056+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.921369+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.928786+0000 mon.a (mon.0) 1012 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.941350+0000 mon.a (mon.0) 1013 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service 
status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:42.944594+0000 mon.a (mon.0) 1014 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:43.444559+0000 mon.a (mon.0) 1015 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:43.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:43 vm08 bash[17774]: audit 2026-03-09T18:38:43.451252+0000 mon.a (mon.0) 1016 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 systemd[1]: Stopping Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.092Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T18:38:44.254 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.094Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T18:38:44.255 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.094Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T18:38:44.255 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[40744]: ts=2026-03-09T18:38:44.094Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T18:38:44.255 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[41935]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus-a 2026-03-09T18:38:44.255 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service: Deactivated successfully. 
2026-03-09T18:38:44.255 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 systemd[1]: Stopped Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:38:44.255 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 systemd[1]: Started Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.335Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.335Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.335Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm08 (none))" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.335Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.335Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.336Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.337Z caller=main.go:1129 
level=info msg="Starting TSDB ..." 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.338Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.338Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.343Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.344Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.986µs 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.344Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.349Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=4 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.367Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=4 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.381Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=4 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.386Z 
caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=4 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.386Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=4 maxSegment=4 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.386Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=33.322µs wal_replay_duration=42.59478ms wbl_replay_duration=120ns total_replay_duration=42.666804ms 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.389Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.389Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.389Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.404Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=14.82547ms db_storage=982ns remote_storage=1.453µs web_handler=621ns query_engine=1.262µs scrape=2.47904ms scrape_sd=81.042µs notify=6.683µs notify_sd=5.971µs rules=11.819804ms tracing=5µs 2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.404Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
2026-03-09T18:38:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:38:44 vm08 bash[42014]: ts=2026-03-09T18:38:44.404Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.937499+0000 mgr.x (mgr.24833) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: cephadm 2026-03-09T18:38:42.941857+0000 mgr.x (mgr.24833) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: cephadm 2026-03-09T18:38:43.448907+0000 mgr.x (mgr.24833) 21 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: cephadm 2026-03-09T18:38:43.603423+0000 mgr.x (mgr.24833) 22 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: cluster 2026-03-09T18:38:43.788140+0000 mgr.x (mgr.24833) 23 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:43.925510+0000 mon.a (mon.0) 1017 : audit [DBG] from='client.? 192.168.123.100:0/3627477000' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.124580+0000 mon.c (mon.1) 149 : audit [INF] from='client.? 
192.168.123.100:0/2928612452' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.124918+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.207900+0000 mon.a (mon.0) 1019 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.217006+0000 mon.a (mon.0) 1020 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.222845+0000 mon.a (mon.0) 1021 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.225551+0000 mon.a (mon.0) 1022 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.228153+0000 mon.a (mon.0) 1023 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:44 vm00 bash[22468]: audit 2026-03-09T18:38:44.275819+0000 mon.a 
(mon.0) 1024 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.937499+0000 mgr.x (mgr.24833) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: cephadm 2026-03-09T18:38:42.941857+0000 mgr.x (mgr.24833) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: cephadm 2026-03-09T18:38:43.448907+0000 mgr.x (mgr.24833) 21 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: cephadm 2026-03-09T18:38:43.603423+0000 mgr.x (mgr.24833) 22 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: cluster 2026-03-09T18:38:43.788140+0000 mgr.x (mgr.24833) 23 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:43.925510+0000 mon.a (mon.0) 1017 : audit [DBG] from='client.? 192.168.123.100:0/3627477000' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.124580+0000 mon.c (mon.1) 149 : audit [INF] from='client.? 
192.168.123.100:0/2928612452' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.124918+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]: dispatch 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.207900+0000 mon.a (mon.0) 1019 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.217006+0000 mon.a (mon.0) 1020 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:45.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.222845+0000 mon.a (mon.0) 1021 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:38:45.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.225551+0000 mon.a (mon.0) 1022 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:38:45.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.228153+0000 mon.a (mon.0) 1023 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:38:45.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:44 vm00 bash[17468]: audit 2026-03-09T18:38:44.275819+0000 mon.a 
(mon.0) 1024 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.937499+0000 mgr.x (mgr.24833) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: cephadm 2026-03-09T18:38:42.941857+0000 mgr.x (mgr.24833) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: cephadm 2026-03-09T18:38:43.448907+0000 mgr.x (mgr.24833) 21 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: cephadm 2026-03-09T18:38:43.603423+0000 mgr.x (mgr.24833) 22 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: cluster 2026-03-09T18:38:43.788140+0000 mgr.x (mgr.24833) 23 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:43.925510+0000 mon.a (mon.0) 1017 : audit [DBG] from='client.? 192.168.123.100:0/3627477000' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.124580+0000 mon.c (mon.1) 149 : audit [INF] from='client.? 
192.168.123.100:0/2928612452' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.124918+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.207900+0000 mon.a (mon.0) 1019 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.217006+0000 mon.a (mon.0) 1020 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.222845+0000 mon.a (mon.0) 1021 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.225551+0000 mon.a (mon.0) 1022 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.228153+0000 mon.a (mon.0) 1023 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:38:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:44 vm08 bash[17774]: audit 2026-03-09T18:38:44.275819+0000 mon.a 
(mon.0) 1024 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: audit 2026-03-09T18:38:44.219840+0000 mgr.x (mgr.24833) 24 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: audit 2026-03-09T18:38:44.222364+0000 mgr.x (mgr.24833) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: audit 2026-03-09T18:38:44.224928+0000 mgr.x (mgr.24833) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: audit 2026-03-09T18:38:44.733062+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]': finished 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: cluster 2026-03-09T18:38:44.733120+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: audit 2026-03-09T18:38:44.927275+0000 mon.c (mon.1) 150 : audit [INF] from='client.? 
192.168.123.100:0/2295308588' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]: dispatch 2026-03-09T18:38:45.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:45 vm08 bash[17774]: audit 2026-03-09T18:38:44.927545+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: audit 2026-03-09T18:38:44.219840+0000 mgr.x (mgr.24833) 24 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: audit 2026-03-09T18:38:44.222364+0000 mgr.x (mgr.24833) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: audit 2026-03-09T18:38:44.224928+0000 mgr.x (mgr.24833) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: audit 2026-03-09T18:38:44.733062+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]': finished 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: cluster 2026-03-09T18:38:44.733120+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: audit 2026-03-09T18:38:44.927275+0000 mon.c (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2295308588' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:45 vm00 bash[22468]: audit 2026-03-09T18:38:44.927545+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: audit 2026-03-09T18:38:44.219840+0000 mgr.x (mgr.24833) 24 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: audit 2026-03-09T18:38:44.222364+0000 mgr.x (mgr.24833) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: audit 2026-03-09T18:38:44.224928+0000 mgr.x (mgr.24833) 26 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: audit 2026-03-09T18:38:44.733062+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2159152617"}]': finished 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: cluster 2026-03-09T18:38:44.733120+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: audit 2026-03-09T18:38:44.927275+0000 mon.c (mon.1) 150 : audit [INF] from='client.? 192.168.123.100:0/2295308588' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]: dispatch 2026-03-09T18:38:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:45 vm00 bash[17468]: audit 2026-03-09T18:38:44.927545+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]: dispatch 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:46 vm00 bash[22468]: audit 2026-03-09T18:38:45.746335+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]': finished 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:46 vm00 bash[22468]: cluster 2026-03-09T18:38:45.746395+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:46 vm00 bash[22468]: cluster 2026-03-09T18:38:45.788413+0000 mgr.x (mgr.24833) 27 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:46 vm00 bash[22468]: audit 2026-03-09T18:38:45.950030+0000 mon.c (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/4210657419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]: dispatch 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:46 vm00 bash[22468]: audit 2026-03-09T18:38:45.950376+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]: dispatch 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:46 vm00 bash[17468]: audit 2026-03-09T18:38:45.746335+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]': finished 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:46 vm00 bash[17468]: cluster 2026-03-09T18:38:45.746395+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:46 vm00 bash[17468]: cluster 2026-03-09T18:38:45.788413+0000 mgr.x (mgr.24833) 27 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:46 vm00 bash[17468]: audit 2026-03-09T18:38:45.950030+0000 mon.c (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/4210657419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]: dispatch 2026-03-09T18:38:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:46 vm00 bash[17468]: audit 2026-03-09T18:38:45.950376+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]: dispatch 2026-03-09T18:38:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:46 vm08 bash[17774]: audit 2026-03-09T18:38:45.746335+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/588917283"}]': finished 2026-03-09T18:38:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:46 vm08 bash[17774]: cluster 2026-03-09T18:38:45.746395+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T18:38:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:46 vm08 bash[17774]: cluster 2026-03-09T18:38:45.788413+0000 mgr.x (mgr.24833) 27 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-09T18:38:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:46 vm08 bash[17774]: audit 2026-03-09T18:38:45.950030+0000 mon.c (mon.1) 151 : audit [INF] from='client.? 192.168.123.100:0/4210657419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]: dispatch 2026-03-09T18:38:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:46 vm08 bash[17774]: audit 2026-03-09T18:38:45.950376+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]: dispatch 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:47 vm00 bash[22468]: audit 2026-03-09T18:38:46.757028+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]': finished 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:47 vm00 bash[22468]: cluster 2026-03-09T18:38:46.757253+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:47 vm00 bash[22468]: audit 2026-03-09T18:38:46.952203+0000 mon.c (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/1060830465' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]: dispatch 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:47 vm00 bash[22468]: audit 2026-03-09T18:38:46.952550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]: dispatch 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:47 vm00 bash[17468]: audit 2026-03-09T18:38:46.757028+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]': finished 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:47 vm00 bash[17468]: cluster 2026-03-09T18:38:46.757253+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:47 vm00 bash[17468]: audit 2026-03-09T18:38:46.952203+0000 mon.c (mon.1) 152 : audit [INF] from='client.? 
192.168.123.100:0/1060830465' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]: dispatch 2026-03-09T18:38:48.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:47 vm00 bash[17468]: audit 2026-03-09T18:38:46.952550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]: dispatch 2026-03-09T18:38:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:47 vm08 bash[17774]: audit 2026-03-09T18:38:46.757028+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3381987243"}]': finished 2026-03-09T18:38:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:47 vm08 bash[17774]: cluster 2026-03-09T18:38:46.757253+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T18:38:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:47 vm08 bash[17774]: audit 2026-03-09T18:38:46.952203+0000 mon.c (mon.1) 152 : audit [INF] from='client.? 192.168.123.100:0/1060830465' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]: dispatch 2026-03-09T18:38:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:47 vm08 bash[17774]: audit 2026-03-09T18:38:46.952550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]: dispatch 2026-03-09T18:38:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:48 vm00 bash[22468]: audit 2026-03-09T18:38:47.757355+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]': finished 2026-03-09T18:38:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:48 vm00 bash[22468]: cluster 2026-03-09T18:38:47.757387+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T18:38:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:48 vm00 bash[22468]: cluster 2026-03-09T18:38:47.788653+0000 mgr.x (mgr.24833) 28 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:38:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:48 vm00 bash[22468]: audit 2026-03-09T18:38:47.953743+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2838742830' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/486507578"}]: dispatch 2026-03-09T18:38:49.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:48 vm00 bash[17468]: audit 2026-03-09T18:38:47.757355+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]': finished 2026-03-09T18:38:49.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:48 vm00 bash[17468]: cluster 2026-03-09T18:38:47.757387+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T18:38:49.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:48 vm00 bash[17468]: cluster 2026-03-09T18:38:47.788653+0000 mgr.x (mgr.24833) 28 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:38:49.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:48 vm00 bash[17468]: audit 2026-03-09T18:38:47.953743+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 
192.168.123.100:0/2838742830' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/486507578"}]: dispatch 2026-03-09T18:38:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:48 vm08 bash[17774]: audit 2026-03-09T18:38:47.757355+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/588917283"}]': finished 2026-03-09T18:38:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:48 vm08 bash[17774]: cluster 2026-03-09T18:38:47.757387+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T18:38:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:48 vm08 bash[17774]: cluster 2026-03-09T18:38:47.788653+0000 mgr.x (mgr.24833) 28 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:38:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:48 vm08 bash[17774]: audit 2026-03-09T18:38:47.953743+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/2838742830' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/486507578"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:48.775618+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 
192.168.123.100:0/2838742830' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/486507578"}]': finished 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: cluster 2026-03-09T18:38:48.775734+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:48.966146+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? 192.168.123.100:0/627266774' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3302485175"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.544556+0000 mon.a (mon.0) 1040 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.552140+0000 mon.a (mon.0) 1041 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.640634+0000 mon.a (mon.0) 1042 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.646284+0000 mon.a (mon.0) 1043 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.648267+0000 mon.a (mon.0) 1044 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.648845+0000 mon.a (mon.0) 1045 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:49 vm00 bash[22468]: audit 2026-03-09T18:38:49.652449+0000 mon.a (mon.0) 1046 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:48.775618+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.100:0/2838742830' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/486507578"}]': finished 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: cluster 2026-03-09T18:38:48.775734+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:48.966146+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? 
192.168.123.100:0/627266774' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3302485175"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.544556+0000 mon.a (mon.0) 1040 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.552140+0000 mon.a (mon.0) 1041 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.640634+0000 mon.a (mon.0) 1042 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.646284+0000 mon.a (mon.0) 1043 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.648267+0000 mon.a (mon.0) 1044 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.648845+0000 mon.a (mon.0) 1045 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:38:50.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:49 vm00 bash[17468]: audit 2026-03-09T18:38:49.652449+0000 mon.a (mon.0) 1046 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 
2026-03-09T18:38:48.775618+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.100:0/2838742830' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/486507578"}]': finished 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: cluster 2026-03-09T18:38:48.775734+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:48.966146+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? 192.168.123.100:0/627266774' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3302485175"}]: dispatch 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.544556+0000 mon.a (mon.0) 1040 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.552140+0000 mon.a (mon.0) 1041 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.640634+0000 mon.a (mon.0) 1042 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.646284+0000 mon.a (mon.0) 1043 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.648267+0000 mon.a (mon.0) 1044 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.648845+0000 mon.a (mon.0) 1045 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:38:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:49 vm08 bash[17774]: audit 2026-03-09T18:38:49.652449+0000 mon.a (mon.0) 1046 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:38:51.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:50 vm00 bash[22468]: audit 2026-03-09T18:38:49.783918+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 192.168.123.100:0/627266774' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3302485175"}]': finished 2026-03-09T18:38:51.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:50 vm00 bash[22468]: cluster 2026-03-09T18:38:49.784005+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T18:38:51.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:50 vm00 bash[22468]: cluster 2026-03-09T18:38:49.788943+0000 mgr.x (mgr.24833) 29 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:38:51.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:50 vm00 bash[17468]: audit 2026-03-09T18:38:49.783918+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
192.168.123.100:0/627266774' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3302485175"}]': finished 2026-03-09T18:38:51.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:50 vm00 bash[17468]: cluster 2026-03-09T18:38:49.784005+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T18:38:51.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:50 vm00 bash[17468]: cluster 2026-03-09T18:38:49.788943+0000 mgr.x (mgr.24833) 29 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:38:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:50 vm08 bash[17774]: audit 2026-03-09T18:38:49.783918+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 192.168.123.100:0/627266774' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/3302485175"}]': finished 2026-03-09T18:38:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:50 vm08 bash[17774]: cluster 2026-03-09T18:38:49.784005+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T18:38:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:50 vm08 bash[17774]: cluster 2026-03-09T18:38:49.788943+0000 mgr.x (mgr.24833) 29 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T18:38:52.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:51 vm00 bash[22468]: audit 2026-03-09T18:38:51.210973+0000 mon.a (mon.0) 1049 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:51 vm00 bash[17468]: audit 2026-03-09T18:38:51.210973+0000 mon.a (mon.0) 1049 : audit [DBG] from='mgr.24833 
192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:51 vm08 bash[17774]: audit 2026-03-09T18:38:51.210973+0000 mon.a (mon.0) 1049 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:38:53.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:52 vm00 bash[22468]: cluster 2026-03-09T18:38:51.789303+0000 mgr.x (mgr.24833) 30 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:53.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:52 vm00 bash[17468]: cluster 2026-03-09T18:38:51.789303+0000 mgr.x (mgr.24833) 30 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:52 vm08 bash[17774]: cluster 2026-03-09T18:38:51.789303+0000 mgr.x (mgr.24833) 30 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:38:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:38:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:38:53] "GET /metrics HTTP/1.1" 200 37536 "" "Prometheus/2.51.0" 2026-03-09T18:38:55.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:54 vm00 bash[22468]: audit 2026-03-09T18:38:53.740693+0000 mgr.x (mgr.24833) 31 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:55.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:54 vm00 bash[22468]: cluster 2026-03-09T18:38:53.789807+0000 mgr.x (mgr.24833) 32 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 
GiB / 160 GiB avail; 847 B/s rd, 0 op/s 2026-03-09T18:38:55.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:54 vm00 bash[17468]: audit 2026-03-09T18:38:53.740693+0000 mgr.x (mgr.24833) 31 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:55.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:54 vm00 bash[17468]: cluster 2026-03-09T18:38:53.789807+0000 mgr.x (mgr.24833) 32 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 847 B/s rd, 0 op/s 2026-03-09T18:38:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:54 vm08 bash[17774]: audit 2026-03-09T18:38:53.740693+0000 mgr.x (mgr.24833) 31 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:38:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:54 vm08 bash[17774]: cluster 2026-03-09T18:38:53.789807+0000 mgr.x (mgr.24833) 32 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 847 B/s rd, 0 op/s 2026-03-09T18:38:57.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:56 vm00 bash[22468]: cluster 2026-03-09T18:38:55.790060+0000 mgr.x (mgr.24833) 33 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:38:57.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:56 vm00 bash[17468]: cluster 2026-03-09T18:38:55.790060+0000 mgr.x (mgr.24833) 33 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:38:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:56 vm08 bash[17774]: cluster 2026-03-09T18:38:55.790060+0000 mgr.x (mgr.24833) 33 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 
160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:38:59.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:38:58 vm00 bash[22468]: cluster 2026-03-09T18:38:57.790542+0000 mgr.x (mgr.24833) 34 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:38:59.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:38:58 vm00 bash[17468]: cluster 2026-03-09T18:38:57.790542+0000 mgr.x (mgr.24833) 34 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:38:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:38:58 vm08 bash[17774]: cluster 2026-03-09T18:38:57.790542+0000 mgr.x (mgr.24833) 34 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:39:01.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:00 vm00 bash[22468]: cluster 2026-03-09T18:38:59.790855+0000 mgr.x (mgr.24833) 35 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T18:39:01.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:00 vm00 bash[17468]: cluster 2026-03-09T18:38:59.790855+0000 mgr.x (mgr.24833) 35 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T18:39:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:00 vm08 bash[17774]: cluster 2026-03-09T18:38:59.790855+0000 mgr.x (mgr.24833) 35 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1022 B/s rd, 0 op/s 2026-03-09T18:39:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:02 vm00 bash[22468]: cluster 2026-03-09T18:39:01.791152+0000 mgr.x (mgr.24833) 36 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:02 vm00 bash[17468]: cluster 2026-03-09T18:39:01.791152+0000 mgr.x (mgr.24833) 36 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:02 vm08 bash[17774]: cluster 2026-03-09T18:39:01.791152+0000 mgr.x (mgr.24833) 36 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:39:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:39:03] "GET /metrics HTTP/1.1" 200 37536 "" "Prometheus/2.51.0" 2026-03-09T18:39:05.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:04 vm00 bash[22468]: audit 2026-03-09T18:39:03.744087+0000 mgr.x (mgr.24833) 37 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:05.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:04 vm00 bash[22468]: cluster 2026-03-09T18:39:03.791622+0000 mgr.x (mgr.24833) 38 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:05.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:04 vm00 bash[17468]: audit 2026-03-09T18:39:03.744087+0000 mgr.x (mgr.24833) 37 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:05.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:04 vm00 bash[17468]: cluster 2026-03-09T18:39:03.791622+0000 mgr.x (mgr.24833) 38 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:05.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:04 vm08 bash[17774]: audit 2026-03-09T18:39:03.744087+0000 mgr.x (mgr.24833) 37 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:04 vm08 bash[17774]: cluster 2026-03-09T18:39:03.791622+0000 mgr.x (mgr.24833) 38 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:07.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:06 vm00 bash[22468]: cluster 2026-03-09T18:39:05.791887+0000 mgr.x (mgr.24833) 39 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:07.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:06 vm00 bash[22468]: audit 2026-03-09T18:39:06.211068+0000 mon.a (mon.0) 1050 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:07.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:06 vm00 bash[17468]: cluster 2026-03-09T18:39:05.791887+0000 mgr.x (mgr.24833) 39 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:07.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:06 vm00 bash[17468]: audit 2026-03-09T18:39:06.211068+0000 mon.a (mon.0) 1050 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:06 vm08 bash[17774]: cluster 2026-03-09T18:39:05.791887+0000 mgr.x (mgr.24833) 39 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:39:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:06 vm08 bash[17774]: audit 2026-03-09T18:39:06.211068+0000 mon.a (mon.0) 1050 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:09.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:08 vm00 bash[22468]: cluster 2026-03-09T18:39:07.792365+0000 mgr.x (mgr.24833) 40 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:09.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:08 vm00 bash[17468]: cluster 2026-03-09T18:39:07.792365+0000 mgr.x (mgr.24833) 40 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:08 vm08 bash[17774]: cluster 2026-03-09T18:39:07.792365+0000 mgr.x (mgr.24833) 40 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:10 vm08 bash[17774]: cluster 2026-03-09T18:39:09.792688+0000 mgr.x (mgr.24833) 41 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:11.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:10 vm00 bash[22468]: cluster 2026-03-09T18:39:09.792688+0000 mgr.x (mgr.24833) 41 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:11.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:10 vm00 bash[17468]: cluster 2026-03-09T18:39:09.792688+0000 mgr.x (mgr.24833) 41 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:39:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:12 vm00 bash[22468]: cluster 2026-03-09T18:39:11.792943+0000 mgr.x (mgr.24833) 42 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:12 vm00 bash[17468]: cluster 2026-03-09T18:39:11.792943+0000 mgr.x (mgr.24833) 42 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:12 vm08 bash[17774]: cluster 2026-03-09T18:39:11.792943+0000 mgr.x (mgr.24833) 42 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:39:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:39:13] "GET /metrics HTTP/1.1" 200 37550 "" "Prometheus/2.51.0" 2026-03-09T18:39:15.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:14 vm00 bash[22468]: audit 2026-03-09T18:39:13.746536+0000 mgr.x (mgr.24833) 43 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:15.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:14 vm00 bash[22468]: cluster 2026-03-09T18:39:13.793465+0000 mgr.x (mgr.24833) 44 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:15.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:14 vm00 bash[17468]: audit 2026-03-09T18:39:13.746536+0000 mgr.x (mgr.24833) 43 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:15.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:39:14 vm00 bash[17468]: cluster 2026-03-09T18:39:13.793465+0000 mgr.x (mgr.24833) 44 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:15.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:14 vm08 bash[17774]: audit 2026-03-09T18:39:13.746536+0000 mgr.x (mgr.24833) 43 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:15.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:14 vm08 bash[17774]: cluster 2026-03-09T18:39:13.793465+0000 mgr.x (mgr.24833) 44 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:17.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:16 vm00 bash[22468]: cluster 2026-03-09T18:39:15.793724+0000 mgr.x (mgr.24833) 45 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:17.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:16 vm00 bash[17468]: cluster 2026-03-09T18:39:15.793724+0000 mgr.x (mgr.24833) 45 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:17.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:16 vm08 bash[17774]: cluster 2026-03-09T18:39:15.793724+0000 mgr.x (mgr.24833) 45 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:19.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:18 vm00 bash[22468]: cluster 2026-03-09T18:39:17.794205+0000 mgr.x (mgr.24833) 46 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:19.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:39:18 vm00 bash[17468]: cluster 2026-03-09T18:39:17.794205+0000 mgr.x (mgr.24833) 46 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:19.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:18 vm08 bash[17774]: cluster 2026-03-09T18:39:17.794205+0000 mgr.x (mgr.24833) 46 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:21.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:20 vm08 bash[17774]: cluster 2026-03-09T18:39:19.794553+0000 mgr.x (mgr.24833) 47 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:21.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:20 vm00 bash[22468]: cluster 2026-03-09T18:39:19.794553+0000 mgr.x (mgr.24833) 47 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:21.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:20 vm00 bash[17468]: cluster 2026-03-09T18:39:19.794553+0000 mgr.x (mgr.24833) 47 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:22.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:21 vm08 bash[17774]: audit 2026-03-09T18:39:21.211231+0000 mon.a (mon.0) 1051 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:22.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:21 vm00 bash[22468]: audit 2026-03-09T18:39:21.211231+0000 mon.a (mon.0) 1051 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:22.378 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:21 vm00 bash[17468]: audit 2026-03-09T18:39:21.211231+0000 mon.a (mon.0) 1051 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:22 vm08 bash[17774]: cluster 2026-03-09T18:39:21.794929+0000 mgr.x (mgr.24833) 48 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:23.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:22 vm00 bash[22468]: cluster 2026-03-09T18:39:21.794929+0000 mgr.x (mgr.24833) 48 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:23.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:22 vm00 bash[17468]: cluster 2026-03-09T18:39:21.794929+0000 mgr.x (mgr.24833) 48 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:39:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:39:23] "GET /metrics HTTP/1.1" 200 37544 "" "Prometheus/2.51.0" 2026-03-09T18:39:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:24 vm08 bash[17774]: audit 2026-03-09T18:39:23.747561+0000 mgr.x (mgr.24833) 49 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:24 vm08 bash[17774]: cluster 2026-03-09T18:39:23.795512+0000 mgr.x (mgr.24833) 50 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:25.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:24 vm00 bash[22468]: 
audit 2026-03-09T18:39:23.747561+0000 mgr.x (mgr.24833) 49 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:25.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:24 vm00 bash[22468]: cluster 2026-03-09T18:39:23.795512+0000 mgr.x (mgr.24833) 50 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:24 vm00 bash[17468]: audit 2026-03-09T18:39:23.747561+0000 mgr.x (mgr.24833) 49 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:24 vm00 bash[17468]: cluster 2026-03-09T18:39:23.795512+0000 mgr.x (mgr.24833) 50 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:27.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:26 vm08 bash[17774]: cluster 2026-03-09T18:39:25.795780+0000 mgr.x (mgr.24833) 51 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:27.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:26 vm00 bash[22468]: cluster 2026-03-09T18:39:25.795780+0000 mgr.x (mgr.24833) 51 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:27.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:26 vm00 bash[17468]: cluster 2026-03-09T18:39:25.795780+0000 mgr.x (mgr.24833) 51 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:29.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:28 vm08 
bash[17774]: cluster 2026-03-09T18:39:27.796368+0000 mgr.x (mgr.24833) 52 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:29.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:28 vm00 bash[22468]: cluster 2026-03-09T18:39:27.796368+0000 mgr.x (mgr.24833) 52 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:29.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:28 vm00 bash[17468]: cluster 2026-03-09T18:39:27.796368+0000 mgr.x (mgr.24833) 52 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:31.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:30 vm08 bash[17774]: cluster 2026-03-09T18:39:29.796644+0000 mgr.x (mgr.24833) 53 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:30 vm00 bash[22468]: cluster 2026-03-09T18:39:29.796644+0000 mgr.x (mgr.24833) 53 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:31.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:30 vm00 bash[17468]: cluster 2026-03-09T18:39:29.796644+0000 mgr.x (mgr.24833) 53 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:32 vm00 bash[22468]: cluster 2026-03-09T18:39:31.796948+0000 mgr.x (mgr.24833) 54 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:32 vm00 bash[17468]: 
cluster 2026-03-09T18:39:31.796948+0000 mgr.x (mgr.24833) 54 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:32 vm08 bash[17774]: cluster 2026-03-09T18:39:31.796948+0000 mgr.x (mgr.24833) 54 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:39:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:39:33] "GET /metrics HTTP/1.1" 200 37544 "" "Prometheus/2.51.0" 2026-03-09T18:39:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:34 vm00 bash[22468]: audit 2026-03-09T18:39:33.755844+0000 mgr.x (mgr.24833) 55 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:34 vm00 bash[22468]: cluster 2026-03-09T18:39:33.797449+0000 mgr.x (mgr.24833) 56 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:34 vm00 bash[17468]: audit 2026-03-09T18:39:33.755844+0000 mgr.x (mgr.24833) 55 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:34 vm00 bash[17468]: cluster 2026-03-09T18:39:33.797449+0000 mgr.x (mgr.24833) 56 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:34 vm08 bash[17774]: audit 2026-03-09T18:39:33.755844+0000 mgr.x (mgr.24833) 55 : audit [DBG] 
from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:34 vm08 bash[17774]: cluster 2026-03-09T18:39:33.797449+0000 mgr.x (mgr.24833) 56 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:36 vm00 bash[22468]: cluster 2026-03-09T18:39:35.797760+0000 mgr.x (mgr.24833) 57 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:36 vm00 bash[22468]: audit 2026-03-09T18:39:36.211125+0000 mon.a (mon.0) 1052 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:37.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:36 vm00 bash[17468]: cluster 2026-03-09T18:39:35.797760+0000 mgr.x (mgr.24833) 57 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:37.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:36 vm00 bash[17468]: audit 2026-03-09T18:39:36.211125+0000 mon.a (mon.0) 1052 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:36 vm08 bash[17774]: cluster 2026-03-09T18:39:35.797760+0000 mgr.x (mgr.24833) 57 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:36 vm08 bash[17774]: audit 2026-03-09T18:39:36.211125+0000 mon.a (mon.0) 1052 : 
audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:39.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:38 vm00 bash[22468]: cluster 2026-03-09T18:39:37.798287+0000 mgr.x (mgr.24833) 58 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:39.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:38 vm00 bash[17468]: cluster 2026-03-09T18:39:37.798287+0000 mgr.x (mgr.24833) 58 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:38 vm08 bash[17774]: cluster 2026-03-09T18:39:37.798287+0000 mgr.x (mgr.24833) 58 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:40 vm00 bash[22468]: cluster 2026-03-09T18:39:39.798583+0000 mgr.x (mgr.24833) 59 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:41.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:40 vm00 bash[17468]: cluster 2026-03-09T18:39:39.798583+0000 mgr.x (mgr.24833) 59 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:40 vm08 bash[17774]: cluster 2026-03-09T18:39:39.798583+0000 mgr.x (mgr.24833) 59 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:42.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:42 vm00 bash[22468]: cluster 2026-03-09T18:39:41.798890+0000 mgr.x (mgr.24833) 60 
: cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:42.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:42 vm00 bash[17468]: cluster 2026-03-09T18:39:41.798890+0000 mgr.x (mgr.24833) 60 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:42 vm08 bash[17774]: cluster 2026-03-09T18:39:41.798890+0000 mgr.x (mgr.24833) 60 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:43.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:39:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:39:43] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-09T18:39:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:44 vm00 bash[22468]: audit 2026-03-09T18:39:43.759295+0000 mgr.x (mgr.24833) 61 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:44 vm00 bash[22468]: cluster 2026-03-09T18:39:43.799476+0000 mgr.x (mgr.24833) 62 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:44 vm00 bash[17468]: audit 2026-03-09T18:39:43.759295+0000 mgr.x (mgr.24833) 61 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:44 vm00 bash[17468]: cluster 2026-03-09T18:39:43.799476+0000 mgr.x (mgr.24833) 62 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:44 vm08 bash[17774]: audit 2026-03-09T18:39:43.759295+0000 mgr.x (mgr.24833) 61 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:44 vm08 bash[17774]: cluster 2026-03-09T18:39:43.799476+0000 mgr.x (mgr.24833) 62 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:46 vm00 bash[22468]: cluster 2026-03-09T18:39:45.800051+0000 mgr.x (mgr.24833) 63 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:46 vm00 bash[17468]: cluster 2026-03-09T18:39:45.800051+0000 mgr.x (mgr.24833) 63 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:46 vm08 bash[17774]: cluster 2026-03-09T18:39:45.800051+0000 mgr.x (mgr.24833) 63 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:48 vm00 bash[22468]: cluster 2026-03-09T18:39:47.800560+0000 mgr.x (mgr.24833) 64 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:49.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:48 vm00 bash[17468]: cluster 2026-03-09T18:39:47.800560+0000 mgr.x (mgr.24833) 64 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:48 vm08 bash[17774]: cluster 2026-03-09T18:39:47.800560+0000 mgr.x (mgr.24833) 64 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:49 vm08 bash[17774]: audit 2026-03-09T18:39:49.694612+0000 mon.a (mon.0) 1053 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:39:50.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:49 vm00 bash[17468]: audit 2026-03-09T18:39:49.694612+0000 mon.a (mon.0) 1053 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:39:50.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:49 vm00 bash[22468]: audit 2026-03-09T18:39:49.694612+0000 mon.a (mon.0) 1053 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:39:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:50 vm08 bash[17774]: cluster 2026-03-09T18:39:49.800856+0000 mgr.x (mgr.24833) 65 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:50 vm08 bash[17774]: audit 2026-03-09T18:39:49.997532+0000 mon.a (mon.0) 1054 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:39:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:50 vm08 bash[17774]: audit 2026-03-09T18:39:49.998266+0000 mon.a (mon.0) 1055 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:39:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:50 vm08 bash[17774]: audit 2026-03-09T18:39:50.006381+0000 mon.a (mon.0) 1056 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:39:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:50 vm00 bash[22468]: cluster 2026-03-09T18:39:49.800856+0000 mgr.x (mgr.24833) 65 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:50 vm00 bash[22468]: audit 2026-03-09T18:39:49.997532+0000 mon.a (mon.0) 1054 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:39:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:50 vm00 bash[22468]: audit 2026-03-09T18:39:49.998266+0000 mon.a (mon.0) 1055 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:39:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:50 vm00 bash[22468]: audit 2026-03-09T18:39:50.006381+0000 mon.a (mon.0) 1056 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:39:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:50 vm00 bash[17468]: cluster 2026-03-09T18:39:49.800856+0000 mgr.x (mgr.24833) 65 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:50 vm00 bash[17468]: audit 2026-03-09T18:39:49.997532+0000 mon.a (mon.0) 1054 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:39:51.379 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:50 vm00 bash[17468]: audit 2026-03-09T18:39:49.998266+0000 mon.a (mon.0) 1055 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:39:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:50 vm00 bash[17468]: audit 2026-03-09T18:39:50.006381+0000 mon.a (mon.0) 1056 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:39:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:51 vm08 bash[17774]: audit 2026-03-09T18:39:51.211385+0000 mon.a (mon.0) 1057 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:52.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:51 vm00 bash[22468]: audit 2026-03-09T18:39:51.211385+0000 mon.a (mon.0) 1057 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:52.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:51 vm00 bash[17468]: audit 2026-03-09T18:39:51.211385+0000 mon.a (mon.0) 1057 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:39:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:52 vm08 bash[17774]: cluster 2026-03-09T18:39:51.801148+0000 mgr.x (mgr.24833) 66 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:53.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:52 vm00 bash[17468]: cluster 2026-03-09T18:39:51.801148+0000 mgr.x (mgr.24833) 66 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:53.378 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:52 vm00 bash[22468]: cluster 2026-03-09T18:39:51.801148+0000 mgr.x (mgr.24833) 66 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:39:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:39:53] "GET /metrics HTTP/1.1" 200 37547 "" "Prometheus/2.51.0" 2026-03-09T18:39:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:54 vm08 bash[17774]: audit 2026-03-09T18:39:53.764442+0000 mgr.x (mgr.24833) 67 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:54 vm08 bash[17774]: cluster 2026-03-09T18:39:53.801587+0000 mgr.x (mgr.24833) 68 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:55.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:54 vm00 bash[17468]: audit 2026-03-09T18:39:53.764442+0000 mgr.x (mgr.24833) 67 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:55.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:54 vm00 bash[17468]: cluster 2026-03-09T18:39:53.801587+0000 mgr.x (mgr.24833) 68 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:55.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:54 vm00 bash[22468]: audit 2026-03-09T18:39:53.764442+0000 mgr.x (mgr.24833) 67 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:39:55.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:54 vm00 
bash[22468]: cluster 2026-03-09T18:39:53.801587+0000 mgr.x (mgr.24833) 68 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:56 vm08 bash[17774]: cluster 2026-03-09T18:39:55.801831+0000 mgr.x (mgr.24833) 69 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:57.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:56 vm00 bash[17468]: cluster 2026-03-09T18:39:55.801831+0000 mgr.x (mgr.24833) 69 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:57.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:56 vm00 bash[22468]: cluster 2026-03-09T18:39:55.801831+0000 mgr.x (mgr.24833) 69 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:39:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:39:58 vm08 bash[17774]: cluster 2026-03-09T18:39:57.802346+0000 mgr.x (mgr.24833) 70 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:59.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:39:58 vm00 bash[17468]: cluster 2026-03-09T18:39:57.802346+0000 mgr.x (mgr.24833) 70 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:39:59.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:39:58 vm00 bash[22468]: cluster 2026-03-09T18:39:57.802346+0000 mgr.x (mgr.24833) 70 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:01.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:01 vm00 
bash[22468]: cluster 2026-03-09T18:39:59.802714+0000 mgr.x (mgr.24833) 71 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:01.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:01 vm00 bash[22468]: cluster 2026-03-09T18:40:00.000104+0000 mon.a (mon.0) 1058 : cluster [INF] overall HEALTH_OK 2026-03-09T18:40:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:00 vm00 bash[17468]: cluster 2026-03-09T18:39:59.802714+0000 mgr.x (mgr.24833) 71 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:00 vm00 bash[17468]: cluster 2026-03-09T18:40:00.000104+0000 mon.a (mon.0) 1058 : cluster [INF] overall HEALTH_OK 2026-03-09T18:40:01.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:00 vm08 bash[17774]: cluster 2026-03-09T18:39:59.802714+0000 mgr.x (mgr.24833) 71 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:01.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:00 vm08 bash[17774]: cluster 2026-03-09T18:40:00.000104+0000 mon.a (mon.0) 1058 : cluster [INF] overall HEALTH_OK 2026-03-09T18:40:02.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:02 vm00 bash[17468]: cluster 2026-03-09T18:40:01.802996+0000 mgr.x (mgr.24833) 72 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:02.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:02 vm00 bash[22468]: cluster 2026-03-09T18:40:01.802996+0000 mgr.x (mgr.24833) 72 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:02 vm08 bash[17774]: 
cluster 2026-03-09T18:40:01.802996+0000 mgr.x (mgr.24833) 72 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:40:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:40:03] "GET /metrics HTTP/1.1" 200 37547 "" "Prometheus/2.51.0" 2026-03-09T18:40:05.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:04 vm00 bash[17468]: audit 2026-03-09T18:40:03.766277+0000 mgr.x (mgr.24833) 73 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:05.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:04 vm00 bash[17468]: cluster 2026-03-09T18:40:03.803415+0000 mgr.x (mgr.24833) 74 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:05.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:04 vm00 bash[22468]: audit 2026-03-09T18:40:03.766277+0000 mgr.x (mgr.24833) 73 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:05.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:04 vm00 bash[22468]: cluster 2026-03-09T18:40:03.803415+0000 mgr.x (mgr.24833) 74 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:04 vm08 bash[17774]: audit 2026-03-09T18:40:03.766277+0000 mgr.x (mgr.24833) 73 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:04 vm08 bash[17774]: cluster 2026-03-09T18:40:03.803415+0000 mgr.x (mgr.24833) 74 : 
cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:07.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:06 vm00 bash[17468]: cluster 2026-03-09T18:40:05.803716+0000 mgr.x (mgr.24833) 75 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:07.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:06 vm00 bash[17468]: audit 2026-03-09T18:40:06.211500+0000 mon.a (mon.0) 1059 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:07.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:06 vm00 bash[22468]: cluster 2026-03-09T18:40:05.803716+0000 mgr.x (mgr.24833) 75 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:07.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:06 vm00 bash[22468]: audit 2026-03-09T18:40:06.211500+0000 mon.a (mon.0) 1059 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:06 vm08 bash[17774]: cluster 2026-03-09T18:40:05.803716+0000 mgr.x (mgr.24833) 75 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:06 vm08 bash[17774]: audit 2026-03-09T18:40:06.211500+0000 mon.a (mon.0) 1059 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:09.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:08 vm00 bash[17468]: cluster 2026-03-09T18:40:07.804347+0000 mgr.x 
(mgr.24833) 76 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:09.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:08 vm00 bash[22468]: cluster 2026-03-09T18:40:07.804347+0000 mgr.x (mgr.24833) 76 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:08 vm08 bash[17774]: cluster 2026-03-09T18:40:07.804347+0000 mgr.x (mgr.24833) 76 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:11.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:10 vm00 bash[17468]: cluster 2026-03-09T18:40:09.804608+0000 mgr.x (mgr.24833) 77 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:11.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:10 vm00 bash[22468]: cluster 2026-03-09T18:40:09.804608+0000 mgr.x (mgr.24833) 77 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:10 vm08 bash[17774]: cluster 2026-03-09T18:40:09.804608+0000 mgr.x (mgr.24833) 77 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:12.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:12 vm00 bash[17468]: cluster 2026-03-09T18:40:11.804920+0000 mgr.x (mgr.24833) 78 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:12 vm00 bash[22468]: cluster 2026-03-09T18:40:11.804920+0000 mgr.x (mgr.24833) 
78 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:12 vm08 bash[17774]: cluster 2026-03-09T18:40:11.804920+0000 mgr.x (mgr.24833) 78 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T18:40:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:40:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:40:13] "GET /metrics HTTP/1.1" 200 37543 "" "Prometheus/2.51.0" 2026-03-09T18:40:15.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:14 vm00 bash[17468]: audit 2026-03-09T18:40:13.776996+0000 mgr.x (mgr.24833) 79 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:15.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:14 vm00 bash[17468]: cluster 2026-03-09T18:40:13.805369+0000 mgr.x (mgr.24833) 80 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:15.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:14 vm00 bash[22468]: audit 2026-03-09T18:40:13.776996+0000 mgr.x (mgr.24833) 79 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:15.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:14 vm00 bash[22468]: cluster 2026-03-09T18:40:13.805369+0000 mgr.x (mgr.24833) 80 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:15.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:14 vm08 bash[17774]: audit 2026-03-09T18:40:13.776996+0000 mgr.x (mgr.24833) 79 : audit [DBG] from='client.15135 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:15.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:14 vm08 bash[17774]: cluster 2026-03-09T18:40:13.805369+0000 mgr.x (mgr.24833) 80 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:17.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:16 vm00 bash[17468]: cluster 2026-03-09T18:40:15.805611+0000 mgr.x (mgr.24833) 81 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:17.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:16 vm00 bash[22468]: cluster 2026-03-09T18:40:15.805611+0000 mgr.x (mgr.24833) 81 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:17.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:16 vm08 bash[17774]: cluster 2026-03-09T18:40:15.805611+0000 mgr.x (mgr.24833) 81 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:19.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:18 vm00 bash[17468]: cluster 2026-03-09T18:40:17.806100+0000 mgr.x (mgr.24833) 82 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:19.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:18 vm00 bash[22468]: cluster 2026-03-09T18:40:17.806100+0000 mgr.x (mgr.24833) 82 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:19.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:18 vm08 bash[17774]: cluster 2026-03-09T18:40:17.806100+0000 mgr.x (mgr.24833) 82 : cluster [DBG] pgmap v60: 161 pgs: 
161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:21.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:20 vm08 bash[17774]: cluster 2026-03-09T18:40:19.806392+0000 mgr.x (mgr.24833) 83 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:21.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:20 vm00 bash[17468]: cluster 2026-03-09T18:40:19.806392+0000 mgr.x (mgr.24833) 83 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:21.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:20 vm00 bash[22468]: cluster 2026-03-09T18:40:19.806392+0000 mgr.x (mgr.24833) 83 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:22.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:21 vm08 bash[17774]: audit 2026-03-09T18:40:21.211687+0000 mon.a (mon.0) 1060 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:22.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:21 vm00 bash[17468]: audit 2026-03-09T18:40:21.211687+0000 mon.a (mon.0) 1060 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:22.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:21 vm00 bash[22468]: audit 2026-03-09T18:40:21.211687+0000 mon.a (mon.0) 1060 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:22 vm08 bash[17774]: cluster 2026-03-09T18:40:21.806687+0000 mgr.x (mgr.24833) 84 : cluster [DBG] pgmap 
v62: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:23.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:22 vm00 bash[17468]: cluster 2026-03-09T18:40:21.806687+0000 mgr.x (mgr.24833) 84 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:23.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:22 vm00 bash[22468]: cluster 2026-03-09T18:40:21.806687+0000 mgr.x (mgr.24833) 84 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:40:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:40:23] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-09T18:40:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:24 vm08 bash[17774]: audit 2026-03-09T18:40:23.784101+0000 mgr.x (mgr.24833) 85 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:24 vm08 bash[17774]: cluster 2026-03-09T18:40:23.807144+0000 mgr.x (mgr.24833) 86 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:25.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:24 vm00 bash[22468]: audit 2026-03-09T18:40:23.784101+0000 mgr.x (mgr.24833) 85 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:25.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:24 vm00 bash[22468]: cluster 2026-03-09T18:40:23.807144+0000 mgr.x (mgr.24833) 86 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:24 vm00 bash[17468]: audit 2026-03-09T18:40:23.784101+0000 mgr.x (mgr.24833) 85 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:25.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:24 vm00 bash[17468]: cluster 2026-03-09T18:40:23.807144+0000 mgr.x (mgr.24833) 86 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:27.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:26 vm08 bash[17774]: cluster 2026-03-09T18:40:25.807436+0000 mgr.x (mgr.24833) 87 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:27.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:26 vm00 bash[22468]: cluster 2026-03-09T18:40:25.807436+0000 mgr.x (mgr.24833) 87 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:27.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:26 vm00 bash[17468]: cluster 2026-03-09T18:40:25.807436+0000 mgr.x (mgr.24833) 87 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:29.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:28 vm08 bash[17774]: cluster 2026-03-09T18:40:27.808035+0000 mgr.x (mgr.24833) 88 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:29.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:28 vm00 bash[22468]: cluster 2026-03-09T18:40:27.808035+0000 mgr.x (mgr.24833) 88 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:29.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:28 vm00 bash[17468]: cluster 2026-03-09T18:40:27.808035+0000 mgr.x (mgr.24833) 88 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:31.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:30 vm08 bash[17774]: cluster 2026-03-09T18:40:29.808350+0000 mgr.x (mgr.24833) 89 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:30 vm00 bash[22468]: cluster 2026-03-09T18:40:29.808350+0000 mgr.x (mgr.24833) 89 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:31.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:30 vm00 bash[17468]: cluster 2026-03-09T18:40:29.808350+0000 mgr.x (mgr.24833) 89 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:32.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:32 vm00 bash[22468]: cluster 2026-03-09T18:40:31.808570+0000 mgr.x (mgr.24833) 90 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:32.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:32 vm00 bash[17468]: cluster 2026-03-09T18:40:31.808570+0000 mgr.x (mgr.24833) 90 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:32 vm08 bash[17774]: cluster 2026-03-09T18:40:31.808570+0000 mgr.x (mgr.24833) 90 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-09T18:40:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:40:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:40:33] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-09T18:40:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:34 vm00 bash[22468]: audit 2026-03-09T18:40:33.794860+0000 mgr.x (mgr.24833) 91 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:34 vm00 bash[22468]: cluster 2026-03-09T18:40:33.809035+0000 mgr.x (mgr.24833) 92 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:34 vm00 bash[17468]: audit 2026-03-09T18:40:33.794860+0000 mgr.x (mgr.24833) 91 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:35.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:34 vm00 bash[17468]: cluster 2026-03-09T18:40:33.809035+0000 mgr.x (mgr.24833) 92 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:34 vm08 bash[17774]: audit 2026-03-09T18:40:33.794860+0000 mgr.x (mgr.24833) 91 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:40:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:34 vm08 bash[17774]: cluster 2026-03-09T18:40:33.809035+0000 mgr.x (mgr.24833) 92 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:37.128 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:36 vm00 bash[22468]: cluster 2026-03-09T18:40:35.809301+0000 mgr.x (mgr.24833) 93 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:36 vm00 bash[22468]: audit 2026-03-09T18:40:36.211847+0000 mon.a (mon.0) 1061 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:36 vm00 bash[17468]: cluster 2026-03-09T18:40:35.809301+0000 mgr.x (mgr.24833) 93 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:36 vm00 bash[17468]: audit 2026-03-09T18:40:36.211847+0000 mon.a (mon.0) 1061 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:36 vm08 bash[17774]: cluster 2026-03-09T18:40:35.809301+0000 mgr.x (mgr.24833) 93 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:36 vm08 bash[17774]: audit 2026-03-09T18:40:36.211847+0000 mon.a (mon.0) 1061 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:40:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:38 vm08 bash[17774]: cluster 2026-03-09T18:40:37.809823+0000 mgr.x (mgr.24833) 94 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:40:39.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:38 vm00 bash[22468]: cluster 2026-03-09T18:40:37.809823+0000 mgr.x (mgr.24833) 94 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:39.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:38 vm00 bash[17468]: cluster 2026-03-09T18:40:37.809823+0000 mgr.x (mgr.24833) 94 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:40:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:40 vm08 bash[17774]: cluster 2026-03-09T18:40:39.810138+0000 mgr.x (mgr.24833) 95 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:41.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:40 vm00 bash[22468]: cluster 2026-03-09T18:40:39.810138+0000 mgr.x (mgr.24833) 95 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:41.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:40 vm00 bash[17468]: cluster 2026-03-09T18:40:39.810138+0000 mgr.x (mgr.24833) 95 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:42.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:42 vm08 bash[17774]: cluster 2026-03-09T18:40:41.810414+0000 mgr.x (mgr.24833) 96 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:40:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:42 vm00 bash[22468]: cluster 2026-03-09T18:40:41.810414+0000 mgr.x (mgr.24833) 96 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:40:43.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:42 vm00 bash[17468]: cluster 2026-03-09T18:40:41.810414+0000 mgr.x (mgr.24833) 96 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:43.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:40:43 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:40:43] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0"
2026-03-09T18:40:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:44 vm00 bash[22468]: audit 2026-03-09T18:40:43.802049+0000 mgr.x (mgr.24833) 97 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:40:45.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:44 vm00 bash[22468]: cluster 2026-03-09T18:40:43.810951+0000 mgr.x (mgr.24833) 98 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:44 vm00 bash[17468]: audit 2026-03-09T18:40:43.802049+0000 mgr.x (mgr.24833) 97 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:40:45.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:44 vm00 bash[17468]: cluster 2026-03-09T18:40:43.810951+0000 mgr.x (mgr.24833) 98 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:44 vm08 bash[17774]: audit 2026-03-09T18:40:43.802049+0000 mgr.x (mgr.24833) 97 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:40:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:44 vm08 bash[17774]: cluster 2026-03-09T18:40:43.810951+0000 mgr.x (mgr.24833) 98 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:47.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:46 vm00 bash[22468]: cluster 2026-03-09T18:40:45.811271+0000 mgr.x (mgr.24833) 99 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:47.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:46 vm00 bash[17468]: cluster 2026-03-09T18:40:45.811271+0000 mgr.x (mgr.24833) 99 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:46 vm08 bash[17774]: cluster 2026-03-09T18:40:45.811271+0000 mgr.x (mgr.24833) 99 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:48 vm00 bash[22468]: cluster 2026-03-09T18:40:47.811765+0000 mgr.x (mgr.24833) 100 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:49.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:48 vm00 bash[17468]: cluster 2026-03-09T18:40:47.811765+0000 mgr.x (mgr.24833) 100 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:48 vm08 bash[17774]: cluster 2026-03-09T18:40:47.811765+0000 mgr.x (mgr.24833) 100 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: cluster 2026-03-09T18:40:49.812088+0000 mgr.x (mgr.24833) 101 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.044827+0000 mon.a (mon.0) 1062 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.315950+0000 mon.a (mon.0) 1063 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.321121+0000 mon.a (mon.0) 1064 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.325998+0000 mon.a (mon.0) 1065 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.334318+0000 mon.a (mon.0) 1066 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.629201+0000 mon.a (mon.0) 1067 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.629787+0000 mon.a (mon.0) 1068 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:40:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:50 vm08 bash[17774]: audit 2026-03-09T18:40:50.634827+0000 mon.a (mon.0) 1069 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: cluster 2026-03-09T18:40:49.812088+0000 mgr.x (mgr.24833) 101 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.044827+0000 mon.a (mon.0) 1062 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.315950+0000 mon.a (mon.0) 1063 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.321121+0000 mon.a (mon.0) 1064 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.325998+0000 mon.a (mon.0) 1065 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.334318+0000 mon.a (mon.0) 1066 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.629201+0000 mon.a (mon.0) 1067 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.629787+0000 mon.a (mon.0) 1068 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:50 vm00 bash[22468]: audit 2026-03-09T18:40:50.634827+0000 mon.a (mon.0) 1069 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: cluster 2026-03-09T18:40:49.812088+0000 mgr.x (mgr.24833) 101 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.044827+0000 mon.a (mon.0) 1062 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.315950+0000 mon.a (mon.0) 1063 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.321121+0000 mon.a (mon.0) 1064 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.325998+0000 mon.a (mon.0) 1065 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.334318+0000 mon.a (mon.0) 1066 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.629201+0000 mon.a (mon.0) 1067 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.629787+0000 mon.a (mon.0) 1068 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:40:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:50 vm00 bash[17468]: audit 2026-03-09T18:40:50.634827+0000 mon.a (mon.0) 1069 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x'
2026-03-09T18:40:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:51 vm08 bash[17774]: audit 2026-03-09T18:40:51.211981+0000 mon.a (mon.0) 1070 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:40:52.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:51 vm00 bash[22468]: audit 2026-03-09T18:40:51.211981+0000 mon.a (mon.0) 1070 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:40:52.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:51 vm00 bash[17468]: audit 2026-03-09T18:40:51.211981+0000 mon.a (mon.0) 1070 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:40:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:52 vm08 bash[17774]: cluster 2026-03-09T18:40:51.812417+0000 mgr.x (mgr.24833) 102 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:53.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:52 vm00 bash[22468]: cluster 2026-03-09T18:40:51.812417+0000 mgr.x (mgr.24833) 102 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:53.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:52 vm00 bash[17468]: cluster 2026-03-09T18:40:51.812417+0000 mgr.x (mgr.24833) 102 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:40:53 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:40:53] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0"
2026-03-09T18:40:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:54 vm08 bash[17774]: audit 2026-03-09T18:40:53.811139+0000 mgr.x (mgr.24833) 103 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:40:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:54 vm08 bash[17774]: cluster 2026-03-09T18:40:53.812901+0000 mgr.x (mgr.24833) 104 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:55.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:54 vm00 bash[22468]: audit 2026-03-09T18:40:53.811139+0000 mgr.x (mgr.24833) 103 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:40:55.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:54 vm00 bash[22468]: cluster 2026-03-09T18:40:53.812901+0000 mgr.x (mgr.24833) 104 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:55.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:54 vm00 bash[17468]: audit 2026-03-09T18:40:53.811139+0000 mgr.x (mgr.24833) 103 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:40:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:54 vm00 bash[17468]: cluster 2026-03-09T18:40:53.812901+0000 mgr.x (mgr.24833) 104 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:56 vm08 bash[17774]: cluster 2026-03-09T18:40:55.813213+0000 mgr.x (mgr.24833) 105 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:57.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:56 vm00 bash[22468]: cluster 2026-03-09T18:40:55.813213+0000 mgr.x (mgr.24833) 105 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:57.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:56 vm00 bash[17468]: cluster 2026-03-09T18:40:55.813213+0000 mgr.x (mgr.24833) 105 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:40:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:40:58 vm08 bash[17774]: cluster 2026-03-09T18:40:57.813735+0000 mgr.x (mgr.24833) 106 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:59.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:40:58 vm00 bash[22468]: cluster 2026-03-09T18:40:57.813735+0000 mgr.x (mgr.24833) 106 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:40:59.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:40:58 vm00 bash[17468]: cluster 2026-03-09T18:40:57.813735+0000 mgr.x (mgr.24833) 106 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:00 vm08 bash[17774]: cluster 2026-03-09T18:40:59.814130+0000 mgr.x (mgr.24833) 107 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:01.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:00 vm00 bash[22468]: cluster 2026-03-09T18:40:59.814130+0000 mgr.x (mgr.24833) 107 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:01.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:00 vm00 bash[17468]: cluster 2026-03-09T18:40:59.814130+0000 mgr.x (mgr.24833) 107 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:02 vm08 bash[17774]: cluster 2026-03-09T18:41:01.814522+0000 mgr.x (mgr.24833) 108 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:03.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:02 vm00 bash[22468]: cluster 2026-03-09T18:41:01.814522+0000 mgr.x (mgr.24833) 108 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:03.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:02 vm00 bash[17468]: cluster 2026-03-09T18:41:01.814522+0000 mgr.x (mgr.24833) 108 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:03.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:03 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:41:03] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0"
2026-03-09T18:41:05.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:04 vm00 bash[22468]: cluster 2026-03-09T18:41:03.815273+0000 mgr.x (mgr.24833) 109 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:05.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:04 vm00 bash[22468]: audit 2026-03-09T18:41:03.817215+0000 mgr.x (mgr.24833) 110 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:04 vm00 bash[17468]: cluster 2026-03-09T18:41:03.815273+0000 mgr.x (mgr.24833) 109 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:04 vm00 bash[17468]: audit 2026-03-09T18:41:03.817215+0000 mgr.x (mgr.24833) 110 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:04 vm08 bash[17774]: cluster 2026-03-09T18:41:03.815273+0000 mgr.x (mgr.24833) 109 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:04 vm08 bash[17774]: audit 2026-03-09T18:41:03.817215+0000 mgr.x (mgr.24833) 110 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:06 vm00 bash[22468]: cluster 2026-03-09T18:41:05.815757+0000 mgr.x (mgr.24833) 111 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:06 vm00 bash[22468]: audit 2026-03-09T18:41:06.212821+0000 mon.a (mon.0) 1071 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:06 vm00 bash[17468]: cluster 2026-03-09T18:41:05.815757+0000 mgr.x (mgr.24833) 111 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:06 vm00 bash[17468]: audit 2026-03-09T18:41:06.212821+0000 mon.a (mon.0) 1071 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:06 vm08 bash[17774]: cluster 2026-03-09T18:41:05.815757+0000 mgr.x (mgr.24833) 111 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:06 vm08 bash[17774]: audit 2026-03-09T18:41:06.212821+0000 mon.a (mon.0) 1071 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:08 vm08 bash[17774]: cluster 2026-03-09T18:41:07.816471+0000 mgr.x (mgr.24833) 112 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:09.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:08 vm00 bash[22468]: cluster 2026-03-09T18:41:07.816471+0000 mgr.x (mgr.24833) 112 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:09.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:08 vm00 bash[17468]: cluster 2026-03-09T18:41:07.816471+0000 mgr.x (mgr.24833) 112 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:10 vm08 bash[17774]: cluster 2026-03-09T18:41:09.816854+0000 mgr.x (mgr.24833) 113 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:11.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:10 vm00 bash[22468]: cluster 2026-03-09T18:41:09.816854+0000 mgr.x (mgr.24833) 113 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:11.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:10 vm00 bash[17468]: cluster 2026-03-09T18:41:09.816854+0000 mgr.x (mgr.24833) 113 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:12 vm08 bash[17774]: cluster 2026-03-09T18:41:11.817243+0000 mgr.x (mgr.24833) 114 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:13.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:12 vm00 bash[22468]: cluster 2026-03-09T18:41:11.817243+0000 mgr.x (mgr.24833) 114 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:12 vm00 bash[17468]: cluster 2026-03-09T18:41:11.817243+0000 mgr.x (mgr.24833) 114 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:13.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:13 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:41:13] "GET /metrics HTTP/1.1" 200 37556 "" "Prometheus/2.51.0"
2026-03-09T18:41:15.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:14 vm08 bash[17774]: cluster 2026-03-09T18:41:13.817985+0000 mgr.x (mgr.24833) 115 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:15.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:14 vm08 bash[17774]: audit 2026-03-09T18:41:13.828187+0000 mgr.x (mgr.24833) 116 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:14 vm00 bash[22468]: cluster 2026-03-09T18:41:13.817985+0000 mgr.x (mgr.24833) 115 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:14 vm00 bash[22468]: audit 2026-03-09T18:41:13.828187+0000 mgr.x (mgr.24833) 116 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:15.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:14 vm00 bash[17468]: cluster 2026-03-09T18:41:13.817985+0000 mgr.x (mgr.24833) 115 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:15.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:14 vm00 bash[17468]: audit 2026-03-09T18:41:13.828187+0000 mgr.x (mgr.24833) 116 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:17.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:16 vm08 bash[17774]: cluster 2026-03-09T18:41:15.818384+0000 mgr.x (mgr.24833) 117 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:17.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:16 vm00 bash[22468]: cluster 2026-03-09T18:41:15.818384+0000 mgr.x (mgr.24833) 117 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:17.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:16 vm00 bash[17468]: cluster 2026-03-09T18:41:15.818384+0000 mgr.x (mgr.24833) 117 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:19.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:18 vm08 bash[17774]: cluster 2026-03-09T18:41:17.818954+0000 mgr.x (mgr.24833) 118 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:18 vm00 bash[22468]: cluster 2026-03-09T18:41:17.818954+0000 mgr.x (mgr.24833) 118 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:18 vm00 bash[17468]: cluster 2026-03-09T18:41:17.818954+0000 mgr.x (mgr.24833) 118 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:20.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:19 vm08 bash[17774]: cluster 2026-03-09T18:41:19.819278+0000 mgr.x (mgr.24833) 119 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:20.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:19 vm00 bash[22468]: cluster 2026-03-09T18:41:19.819278+0000 mgr.x (mgr.24833) 119 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:20.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:19 vm00 bash[17468]: cluster 2026-03-09T18:41:19.819278+0000 mgr.x (mgr.24833) 119 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:21.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:21 vm00 bash[22468]: audit 2026-03-09T18:41:21.212650+0000 mon.a (mon.0) 1072 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:21 vm00 bash[17468]: audit 2026-03-09T18:41:21.212650+0000 mon.a (mon.0) 1072 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:21.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:21 vm08 bash[17774]: audit 2026-03-09T18:41:21.212650+0000 mon.a (mon.0) 1072 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:22.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:22 vm08 bash[17774]: cluster 2026-03-09T18:41:21.819661+0000 mgr.x (mgr.24833) 120 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:22 vm00 bash[22468]: cluster 2026-03-09T18:41:21.819661+0000 mgr.x (mgr.24833) 120 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:22 vm00 bash[17468]: cluster 2026-03-09T18:41:21.819661+0000 mgr.x (mgr.24833) 120 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:23.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:23 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:41:23] "GET /metrics HTTP/1.1" 200 37553 "" "Prometheus/2.51.0"
2026-03-09T18:41:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:24 vm00 bash[22468]: cluster 2026-03-09T18:41:23.820194+0000 mgr.x (mgr.24833) 121 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:25.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:24 vm00 bash[22468]: audit 2026-03-09T18:41:23.838845+0000 mgr.x (mgr.24833) 122 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:24 vm00 bash[17468]: cluster 2026-03-09T18:41:23.820194+0000 mgr.x (mgr.24833) 121 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:24 vm00 bash[17468]: audit 2026-03-09T18:41:23.838845+0000 mgr.x (mgr.24833) 122 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:24 vm08 bash[17774]: cluster 2026-03-09T18:41:23.820194+0000 mgr.x (mgr.24833) 121 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:24 vm08 bash[17774]: audit 2026-03-09T18:41:23.838845+0000 mgr.x (mgr.24833) 122 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:41:27.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:26 vm08 bash[17774]: cluster 2026-03-09T18:41:25.820550+0000 mgr.x (mgr.24833) 123 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:27.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:26 vm00 bash[22468]: cluster 2026-03-09T18:41:25.820550+0000 mgr.x (mgr.24833) 123 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:27.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:26 vm00 bash[17468]: cluster 2026-03-09T18:41:25.820550+0000 mgr.x (mgr.24833) 123 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:29.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:28 vm08 bash[17774]: cluster 2026-03-09T18:41:27.821188+0000 mgr.x (mgr.24833) 124 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:28 vm00 bash[22468]: cluster 2026-03-09T18:41:27.821188+0000 mgr.x (mgr.24833) 124 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:28 vm00 bash[17468]: cluster 2026-03-09T18:41:27.821188+0000 mgr.x (mgr.24833) 124 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:31.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:30 vm08 bash[17774]: cluster 2026-03-09T18:41:29.821520+0000 mgr.x (mgr.24833) 125 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:30 vm00 bash[22468]: cluster 2026-03-09T18:41:29.821520+0000 mgr.x (mgr.24833) 125 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:31.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:30 vm00 bash[17468]: cluster 2026-03-09T18:41:29.821520+0000 mgr.x (mgr.24833) 125 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:32 vm08 bash[17774]: cluster 2026-03-09T18:41:31.821846+0000 mgr.x (mgr.24833) 126 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:32 vm00 bash[22468]: cluster 2026-03-09T18:41:31.821846+0000 mgr.x (mgr.24833) 126 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:32 vm00 bash[17468]: cluster 2026-03-09T18:41:31.821846+0000 mgr.x (mgr.24833) 126 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:33.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:33 vm08 bash[36576]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:41:33] "GET /metrics HTTP/1.1" 200 37553 "" "Prometheus/2.51.0" 2026-03-09T18:41:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:34 vm08 bash[17774]: cluster 2026-03-09T18:41:33.822402+0000 mgr.x (mgr.24833) 127 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:41:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:34 vm08 bash[17774]: audit 2026-03-09T18:41:33.840188+0000 mgr.x (mgr.24833) 128 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:41:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:34 vm00 bash[22468]: cluster 2026-03-09T18:41:33.822402+0000 mgr.x (mgr.24833) 127 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:41:35.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:34 vm00 bash[22468]: audit 2026-03-09T18:41:33.840188+0000 mgr.x (mgr.24833) 128 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:41:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:34 vm00 bash[17468]: cluster 2026-03-09T18:41:33.822402+0000 mgr.x (mgr.24833) 127 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:41:35.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:34 vm00 bash[17468]: audit 2026-03-09T18:41:33.840188+0000 mgr.x (mgr.24833) 128 : audit [DBG] from='client.15135 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:41:36.302 
DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (11m) 2m ago 18m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (11m) 2m ago 18m 39.3M - dad864ee21e9 b6a0baf6efb9
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 2m ago 18m 66.2M - 3.5 e1d6a67b021e 8049f497913f
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283 running (13m) 2m ago 21m 517M - 19.2.3-678-ge911bdeb 654f31e6858e c24396cb6839
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (9m) 2m ago 22m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (22m) 2m ago 22m 64.1M 2048M 17.2.0 e1d6a67b021e 819e8890799a
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (21m) 2m ago 21m 48.6M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (21m) 2m ago 21m 49.6M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (11m) 2m ago 19m 7707k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (11m) 2m ago 19m 7612k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (21m) 2m ago 21m 51.4M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (21m) 2m ago 21m 53.9M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (20m) 2m ago 20m 48.1M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (20m) 2m ago 20m 53.9M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (20m) 2m ago 20m 52.6M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (19m) 2m ago 19m 52.1M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (19m) 2m ago 19m 50.5M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (19m) 2m ago 19m 51.5M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 2m ago 18m 37.9M - 2.51.0 1d3b7f56885b f84283b91513
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (18m) 2m ago 18m 87.2M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:41:36.836 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (18m) 2m ago 18m 87.9M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:41:36.907 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls'
2026-03-09T18:41:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:36 vm00 bash[22468]: cluster 2026-03-09T18:41:35.822811+0000 mgr.x (mgr.24833) 129 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:36 vm00 bash[22468]: audit 2026-03-09T18:41:36.213310+0000 mon.a (mon.0) 1073 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:36 vm00 bash[17468]: cluster 2026-03-09T18:41:35.822811+0000 mgr.x (mgr.24833) 129 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:36 vm00 bash[17468]: audit 2026-03-09T18:41:36.213310+0000 mon.a (mon.0) 1073 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:36 vm08 bash[17774]: cluster 2026-03-09T18:41:35.822811+0000 mgr.x (mgr.24833) 129 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:41:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:36 vm08 bash[17774]: audit 2026-03-09T18:41:36.213310+0000 mon.a (mon.0) 1073 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 2m ago 19m vm00=a;count:1
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 2m ago 19m vm08=a;count:1
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo ?:5000 1/1 2m ago 18m count:1
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 2m ago 21m vm00=y;vm08=x;count:2
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:mon 3/3 2m ago 21m vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm08:192.168.123.108=b;count:3
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 2/2 2m ago 19m vm00=a;vm08=b;count:2
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:osd 8 2m ago -
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 2m ago 19m vm08=a;count:1
2026-03-09T18:41:37.388 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo ?:8000 2/2 2m ago 18m count:2
2026-03-09T18:41:37.476 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {},
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:41:38.066 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:41:38.067 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:38.067 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:41:38.067 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-09T18:41:38.067 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:41:38.067 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:41:38.067 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:41:38.080 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:37 vm00 bash[22468]: audit 2026-03-09T18:41:36.832119+0000 mgr.x (mgr.24833) 130 : audit [DBG] from='client.25063 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:38.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:37 vm00 bash[17468]: audit 2026-03-09T18:41:36.832119+0000 mgr.x (mgr.24833) 130 : audit [DBG] from='client.25063 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:38.147 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'
2026-03-09T18:41:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:37 vm08 bash[17774]: audit 2026-03-09T18:41:36.832119+0000 mgr.x (mgr.24833) 130 : audit [DBG] from='client.25063 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:38 vm08 bash[17774]: audit 2026-03-09T18:41:37.385904+0000 mgr.x (mgr.24833) 131 : audit [DBG] from='client.25066 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:38 vm08 bash[17774]: cluster 2026-03-09T18:41:37.823515+0000 mgr.x (mgr.24833) 132 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:38 vm08 bash[17774]: audit 2026-03-09T18:41:38.069848+0000 mon.a (mon.0) 1074 : audit [DBG] from='client.? 192.168.123.100:0/2152717701' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:41:39.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:38 vm00 bash[17468]: audit 2026-03-09T18:41:37.385904+0000 mgr.x (mgr.24833) 131 : audit [DBG] from='client.25066 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:39.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:38 vm00 bash[17468]: cluster 2026-03-09T18:41:37.823515+0000 mgr.x (mgr.24833) 132 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:41:39.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:38 vm00 bash[17468]: audit 2026-03-09T18:41:38.069848+0000 mon.a (mon.0) 1074 : audit [DBG] from='client.?
192.168.123.100:0/2152717701' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:41:39.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:38 vm00 bash[22468]: audit 2026-03-09T18:41:37.385904+0000 mgr.x (mgr.24833) 131 : audit [DBG] from='client.25066 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:39.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:38 vm00 bash[22468]: cluster 2026-03-09T18:41:37.823515+0000 mgr.x (mgr.24833) 132 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:41:39.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:38 vm00 bash[22468]: audit 2026-03-09T18:41:38.069848+0000 mon.a (mon.0) 1074 : audit [DBG] from='client.? 192.168.123.100:0/2152717701' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:41:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:39 vm08 bash[17774]: audit 2026-03-09T18:41:38.683155+0000 mgr.x (mgr.24833) 133 : audit [DBG] from='client.25075 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:40.247 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:40.261 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:39 vm00 bash[22468]: audit 2026-03-09T18:41:38.683155+0000 mgr.x (mgr.24833) 133 : audit [DBG] from='client.25075 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:40.261 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:39 vm00 bash[17468]: audit 
2026-03-09T18:41:38.683155+0000 mgr.x (mgr.24833) 133 : audit [DBG] from='client.25075 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:40.324 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-09T18:41:40.902 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: cluster 2026-03-09T18:41:39.823947+0000 mgr.x (mgr.24833) 134 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: cephadm 2026-03-09T18:41:40.235339+0000 mgr.x (mgr.24833) 135 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: audit 2026-03-09T18:41:40.247153+0000 mon.a (mon.0) 1075 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: audit 2026-03-09T18:41:40.247704+0000 mon.a (mon.0) 1076 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:41.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: audit 2026-03-09T18:41:40.250293+0000 mon.a (mon.0) 1077 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: audit 2026-03-09T18:41:40.250887+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: audit 2026-03-09T18:41:40.259482+0000 mon.a (mon.0) 1079 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: cephadm 2026-03-09T18:41:40.304581+0000 mgr.x (mgr.24833) 136 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:40 vm08 bash[17774]: audit 2026-03-09T18:41:40.889682+0000 mgr.x (mgr.24833) 137 : audit [DBG] from='client.25081 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: cluster 2026-03-09T18:41:39.823947+0000 mgr.x (mgr.24833) 134 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: cephadm 2026-03-09T18:41:40.235339+0000 mgr.x (mgr.24833) 135 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: audit 
2026-03-09T18:41:40.247153+0000 mon.a (mon.0) 1075 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: audit 2026-03-09T18:41:40.247704+0000 mon.a (mon.0) 1076 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: audit 2026-03-09T18:41:40.250293+0000 mon.a (mon.0) 1077 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: audit 2026-03-09T18:41:40.250887+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: audit 2026-03-09T18:41:40.259482+0000 mon.a (mon.0) 1079 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: cephadm 2026-03-09T18:41:40.304581+0000 mgr.x (mgr.24833) 136 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:40 vm00 bash[17468]: audit 2026-03-09T18:41:40.889682+0000 mgr.x (mgr.24833) 137 : audit [DBG] from='client.25081 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: cluster 2026-03-09T18:41:39.823947+0000 mgr.x (mgr.24833) 134 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB 
data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: cephadm 2026-03-09T18:41:40.235339+0000 mgr.x (mgr.24833) 135 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: audit 2026-03-09T18:41:40.247153+0000 mon.a (mon.0) 1075 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: audit 2026-03-09T18:41:40.247704+0000 mon.a (mon.0) 1076 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: audit 2026-03-09T18:41:40.250293+0000 mon.a (mon.0) 1077 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: audit 2026-03-09T18:41:40.250887+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: audit 2026-03-09T18:41:40.259482+0000 mon.a (mon.0) 1079 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:40 vm00 bash[22468]: cephadm 2026-03-09T18:41:40.304581+0000 mgr.x (mgr.24833) 136 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:41:41.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:41:40 vm00 bash[22468]: audit 2026-03-09T18:41:40.889682+0000 mgr.x (mgr.24833) 137 : audit [DBG] from='client.25081 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (12m) 2m ago 18m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (11m) 2m ago 18m 39.3M - dad864ee21e9 b6a0baf6efb9
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 2m ago 18m 66.2M - 3.5 e1d6a67b021e 8049f497913f
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283 running (14m) 2m ago 21m 517M - 19.2.3-678-ge911bdeb 654f31e6858e c24396cb6839
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (9m) 2m ago 22m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (22m) 2m ago 22m 64.1M 2048M 17.2.0 e1d6a67b021e 819e8890799a
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (21m) 2m ago 21m 48.6M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (21m) 2m ago 21m 49.6M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (11m) 2m ago 19m 7707k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (11m) 2m ago 19m 7612k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (21m) 2m ago 21m 51.4M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (21m) 2m ago 21m 53.9M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (20m) 2m ago 20m 48.1M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (20m) 2m ago 20m 53.9M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (20m) 2m ago 20m 52.6M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (20m) 2m ago 20m 52.1M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (19m) 2m ago 19m 50.5M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (19m) 2m ago 19m 51.5M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 2m ago 19m 37.9M - 2.51.0 1d3b7f56885b f84283b91513
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (18m) 2m ago 18m 87.2M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:41:41.337 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (18m) 2m ago 18m 87.9M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {},
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:41:41.612 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:41:41.908 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true,
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) mgr",
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "mgr"
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: ],
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "2/2 daemons upgraded",
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image",
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:41:41.909 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.116869+0000 mgr.x (mgr.24833) 138 : audit [DBG] from='client.25087 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.331087+0000 mgr.x (mgr.24833) 139 : audit [DBG] from='client.25093 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.615540+0000 mon.a (mon.0) 1080 : audit [DBG] from='client.?
192.168.123.100:0/1834249395' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cluster 2026-03-09T18:41:41.824290+0000 mgr.x (mgr.24833) 140 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.908589+0000 mgr.x (mgr.24833) 141 : audit [DBG] from='client.25102 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cephadm 2026-03-09T18:41:41.914475+0000 mgr.x (mgr.24833) 142 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cephadm 2026-03-09T18:41:41.914538+0000 mgr.x (mgr.24833) 143 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.916206+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cephadm 2026-03-09T18:41:41.916838+0000 mgr.x (mgr.24833) 144 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cephadm 2026-03-09T18:41:41.917384+0000 mgr.x (mgr.24833) 145 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T18:41:42.250 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.918841+0000 mon.a (mon.0) 1082 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cephadm 2026-03-09T18:41:41.919633+0000 mgr.x (mgr.24833) 146 : cephadm [INF] Failing over to other MGR 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: audit 2026-03-09T18:41:41.922944+0000 mon.a (mon.0) 1083 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T18:41:42.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:42 vm00 bash[17468]: cluster 2026-03-09T18:41:41.929231+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: audit 2026-03-09T18:41:41.116869+0000 mgr.x (mgr.24833) 138 : audit [DBG] from='client.25087 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: audit 2026-03-09T18:41:41.331087+0000 mgr.x (mgr.24833) 139 : audit [DBG] from='client.25093 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: audit 2026-03-09T18:41:41.615540+0000 mon.a (mon.0) 1080 : audit [DBG] from='client.? 
192.168.123.100:0/1834249395' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: cluster 2026-03-09T18:41:41.824290+0000 mgr.x (mgr.24833) 140 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: audit 2026-03-09T18:41:41.908589+0000 mgr.x (mgr.24833) 141 : audit [DBG] from='client.25102 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: cephadm 2026-03-09T18:41:41.914475+0000 mgr.x (mgr.24833) 142 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: cephadm 2026-03-09T18:41:41.914538+0000 mgr.x (mgr.24833) 143 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: audit 2026-03-09T18:41:41.916206+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: cephadm 2026-03-09T18:41:41.916838+0000 mgr.x (mgr.24833) 144 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: cephadm 2026-03-09T18:41:41.917384+0000 mgr.x (mgr.24833) 145 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T18:41:42.251 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: audit 2026-03-09T18:41:41.918841+0000 mon.a (mon.0) 1082 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:41 vm00 bash[22468]: cephadm 2026-03-09T18:41:41.919633+0000 mgr.x (mgr.24833) 146 : cephadm [INF] Failing over to other MGR 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:42 vm00 bash[22468]: audit 2026-03-09T18:41:41.922944+0000 mon.a (mon.0) 1083 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T18:41:42.251 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:42 vm00 bash[22468]: cluster 2026-03-09T18:41:41.929231+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:41:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:41 vm08 bash[17774]: audit 2026-03-09T18:41:41.116869+0000 mgr.x (mgr.24833) 138 : audit [DBG] from='client.25087 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:41 vm08 bash[17774]: audit 2026-03-09T18:41:41.331087+0000 mgr.x (mgr.24833) 139 : audit [DBG] from='client.25093 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: audit 2026-03-09T18:41:41.615540+0000 mon.a (mon.0) 1080 : audit [DBG] from='client.? 
192.168.123.100:0/1834249395' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cluster 2026-03-09T18:41:41.824290+0000 mgr.x (mgr.24833) 140 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: audit 2026-03-09T18:41:41.908589+0000 mgr.x (mgr.24833) 141 : audit [DBG] from='client.25102 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cephadm 2026-03-09T18:41:41.914475+0000 mgr.x (mgr.24833) 142 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cephadm 2026-03-09T18:41:41.914538+0000 mgr.x (mgr.24833) 143 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: audit 2026-03-09T18:41:41.916206+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cephadm 2026-03-09T18:41:41.916838+0000 mgr.x (mgr.24833) 144 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cephadm 2026-03-09T18:41:41.917384+0000 mgr.x (mgr.24833) 145 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T18:41:42.475 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: audit 2026-03-09T18:41:41.918841+0000 mon.a (mon.0) 1082 : audit [DBG] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cephadm 2026-03-09T18:41:41.919633+0000 mgr.x (mgr.24833) 146 : cephadm [INF] Failing over to other MGR 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: audit 2026-03-09T18:41:41.922944+0000 mon.a (mon.0) 1083 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T18:41:42.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:42 vm08 bash[17774]: cluster 2026-03-09T18:41:41.929231+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:41:43.211 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:42 vm08 bash[36576]: ignoring --setuser ceph since I am not root 2026-03-09T18:41:43.211 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:42 vm08 bash[36576]: ignoring --setgroup ceph since I am not root 2026-03-09T18:41:43.211 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:42 vm08 bash[36576]: debug 2026-03-09T18:41:42.975+0000 7fe4790c4640 1 -- 192.168.123.108:0/3217554249 <== mon.2 v2:192.168.123.108:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x55c9110d44e0 con 0x55c9110b3800 2026-03-09T18:41:43.211 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:43 vm08 bash[36576]: debug 2026-03-09T18:41:43.039+0000 7fe47b921140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:41:43.211 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:43 vm08 bash[36576]: debug 2026-03-09T18:41:43.075+0000 7fe47b921140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:41:43.272 
INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:42 vm00 bash[53976]: [09/Mar/2026:18:41:42] ENGINE Bus STOPPING 2026-03-09T18:41:43.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:43 vm08 bash[36576]: debug 2026-03-09T18:41:43.207+0000 7fe47b921140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:41:43.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:43 vm00 bash[53976]: [09/Mar/2026:18:41:43] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:41:43.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:43 vm00 bash[53976]: [09/Mar/2026:18:41:43] ENGINE Bus STOPPED 2026-03-09T18:41:43.538 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:43 vm00 bash[53976]: [09/Mar/2026:18:41:43] ENGINE Bus STARTING 2026-03-09T18:41:43.780 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:43 vm08 bash[36576]: debug 2026-03-09T18:41:43.527+0000 7fe47b921140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:41:43.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:43 vm00 bash[53976]: [09/Mar/2026:18:41:43] ENGINE Serving on http://:::9283 2026-03-09T18:41:43.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:43 vm00 bash[53976]: [09/Mar/2026:18:41:43] ENGINE Bus STARTED 2026-03-09T18:41:44.049 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.926638+0000 mon.a (mon.0) 1085 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: cluster 2026-03-09T18:41:42.926686+0000 mon.a (mon.0) 1086 : cluster [DBG] mgrmap e37: y(active, starting, since 1.00258s) 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.931986+0000 mon.a (mon.0) 1087 : audit [DBG] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.932045+0000 mon.a (mon.0) 1088 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.932095+0000 mon.a (mon.0) 1089 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.934103+0000 mon.a (mon.0) 1090 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.938345+0000 mon.a (mon.0) 1091 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.938402+0000 mon.a (mon.0) 1092 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.938863+0000 mon.a (mon.0) 1093 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939122+0000 mon.a (mon.0) 1094 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939210+0000 mon.a (mon.0) 1095 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939275+0000 mon.a (mon.0) 1096 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939333+0000 mon.a (mon.0) 1097 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939390+0000 mon.a (mon.0) 1098 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939448+0000 mon.a (mon.0) 1099 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939729+0000 mon.a (mon.0) 1100 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:42.939891+0000 mon.a (mon.0) 1101 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": 
"osd metadata", "id": 7}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: cluster 2026-03-09T18:41:43.284700+0000 mon.a (mon.0) 1102 : cluster [INF] Manager daemon y is now available 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:43.320759+0000 mon.a (mon.0) 1103 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:43.321306+0000 mon.a (mon.0) 1104 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:43.330114+0000 mon.a (mon.0) 1105 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:41:44.050 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:43 vm08 bash[17774]: audit 2026-03-09T18:41:43.361608+0000 mon.a (mon.0) 1106 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.926638+0000 mon.a (mon.0) 1085 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: cluster 2026-03-09T18:41:42.926686+0000 mon.a (mon.0) 1086 : cluster [DBG] mgrmap e37: y(active, starting, since 1.00258s) 
2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.931986+0000 mon.a (mon.0) 1087 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.932045+0000 mon.a (mon.0) 1088 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.932095+0000 mon.a (mon.0) 1089 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.934103+0000 mon.a (mon.0) 1090 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.938345+0000 mon.a (mon.0) 1091 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.938402+0000 mon.a (mon.0) 1092 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.938863+0000 mon.a (mon.0) 1093 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:41:44.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939122+0000 mon.a (mon.0) 1094 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939210+0000 mon.a (mon.0) 1095 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939275+0000 mon.a (mon.0) 1096 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939333+0000 mon.a (mon.0) 1097 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939390+0000 mon.a (mon.0) 1098 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939448+0000 mon.a (mon.0) 1099 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939729+0000 mon.a (mon.0) 1100 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:41:44.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:42.939891+0000 mon.a (mon.0) 1101 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: cluster 2026-03-09T18:41:43.284700+0000 mon.a (mon.0) 1102 : cluster [INF] Manager daemon y is now available 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:43.320759+0000 mon.a (mon.0) 1103 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:43.321306+0000 mon.a (mon.0) 1104 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:43.330114+0000 mon.a (mon.0) 1105 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:43 vm00 bash[22468]: audit 2026-03-09T18:41:43.361608+0000 mon.a (mon.0) 1106 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.926638+0000 mon.a (mon.0) 1085 : audit [INF] from='mgr.24833 192.168.123.108:0/941516042' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 
2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: cluster 2026-03-09T18:41:42.926686+0000 mon.a (mon.0) 1086 : cluster [DBG] mgrmap e37: y(active, starting, since 1.00258s) 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.931986+0000 mon.a (mon.0) 1087 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.932045+0000 mon.a (mon.0) 1088 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.932095+0000 mon.a (mon.0) 1089 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.934103+0000 mon.a (mon.0) 1090 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.938345+0000 mon.a (mon.0) 1091 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.938402+0000 mon.a (mon.0) 1092 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 
bash[17468]: audit 2026-03-09T18:41:42.938863+0000 mon.a (mon.0) 1093 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939122+0000 mon.a (mon.0) 1094 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939210+0000 mon.a (mon.0) 1095 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:41:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939275+0000 mon.a (mon.0) 1096 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939333+0000 mon.a (mon.0) 1097 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939390+0000 mon.a (mon.0) 1098 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939448+0000 mon.a (mon.0) 1099 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 
2026-03-09T18:41:42.939729+0000 mon.a (mon.0) 1100 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:42.939891+0000 mon.a (mon.0) 1101 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: cluster 2026-03-09T18:41:43.284700+0000 mon.a (mon.0) 1102 : cluster [INF] Manager daemon y is now available 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:43.320759+0000 mon.a (mon.0) 1103 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:43.321306+0000 mon.a (mon.0) 1104 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:43.330114+0000 mon.a (mon.0) 1105 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:41:44.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:43 vm00 bash[17468]: audit 2026-03-09T18:41:43.361608+0000 mon.a (mon.0) 1106 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:41:44.398 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 
bash[36576]: debug 2026-03-09T18:41:44.047+0000 7fe47b921140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:41:44.398 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.139+0000 7fe47b921140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:41:44.398 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:41:44.398 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:41:44.398 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: from numpy import show_config as show_numpy_config 2026-03-09T18:41:44.398 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.271+0000 7fe47b921140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:41:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:44 vm08 bash[42014]: ts=2026-03-09T18:41:44.398Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.108:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.108:8765: connect: connection refused" 2026-03-09T18:41:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:44 vm08 bash[42014]: ts=2026-03-09T18:41:44.398Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.108:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.108:8765: connect: connection refused" 2026-03-09T18:41:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:44 vm08 bash[42014]: ts=2026-03-09T18:41:44.398Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.108:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.108:8765: connect: connection refused" 2026-03-09T18:41:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:44 vm08 bash[42014]: ts=2026-03-09T18:41:44.398Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.108:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.108:8765: 
connect: connection refused" 2026-03-09T18:41:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:44 vm08 bash[42014]: ts=2026-03-09T18:41:44.400Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.108:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.108:8765: connect: connection refused" 2026-03-09T18:41:44.724 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:44 vm08 bash[42014]: ts=2026-03-09T18:41:44.401Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.108:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.108:8765: connect: connection refused" 2026-03-09T18:41:44.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.427+0000 7fe47b921140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:41:44.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.467+0000 7fe47b921140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:41:44.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.507+0000 7fe47b921140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:41:44.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.555+0000 7fe47b921140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:41:44.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:44 vm08 bash[36576]: debug 2026-03-09T18:41:44.615+0000 7fe47b921140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:41:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:44 vm08 bash[17774]: cluster 
2026-03-09T18:41:43.967649+0000 mon.a (mon.0) 1107 : cluster [DBG] mgrmap e38: y(active, since 2s) 2026-03-09T18:41:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:44 vm08 bash[17774]: cephadm 2026-03-09T18:41:44.448162+0000 mgr.y (mgr.24991) 3 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Bus STARTING 2026-03-09T18:41:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:44 vm08 bash[17774]: cephadm 2026-03-09T18:41:44.556628+0000 mgr.y (mgr.24991) 4 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:41:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:44 vm08 bash[17774]: cephadm 2026-03-09T18:41:44.558022+0000 mgr.y (mgr.24991) 5 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Client ('192.168.123.100', 55672) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:41:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:44 vm08 bash[17774]: cephadm 2026-03-09T18:41:44.658153+0000 mgr.y (mgr.24991) 6 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:41:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:44 vm08 bash[17774]: cephadm 2026-03-09T18:41:44.658417+0000 mgr.y (mgr.24991) 7 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Bus STARTED 2026-03-09T18:41:45.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.135+0000 7fe47b921140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:41:45.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.183+0000 7fe47b921140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:41:45.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:44 vm00 bash[22468]: cluster 2026-03-09T18:41:43.967649+0000 mon.a (mon.0) 1107 : cluster [DBG] mgrmap e38: y(active, since 2s) 
2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:44 vm00 bash[22468]: cephadm 2026-03-09T18:41:44.448162+0000 mgr.y (mgr.24991) 3 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Bus STARTING 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:44 vm00 bash[22468]: cephadm 2026-03-09T18:41:44.556628+0000 mgr.y (mgr.24991) 4 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:44 vm00 bash[22468]: cephadm 2026-03-09T18:41:44.558022+0000 mgr.y (mgr.24991) 5 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Client ('192.168.123.100', 55672) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:44 vm00 bash[22468]: cephadm 2026-03-09T18:41:44.658153+0000 mgr.y (mgr.24991) 6 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:44 vm00 bash[22468]: cephadm 2026-03-09T18:41:44.658417+0000 mgr.y (mgr.24991) 7 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Bus STARTED 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:44 vm00 bash[17468]: cluster 2026-03-09T18:41:43.967649+0000 mon.a (mon.0) 1107 : cluster [DBG] mgrmap e38: y(active, since 2s) 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:44 vm00 bash[17468]: cephadm 2026-03-09T18:41:44.448162+0000 mgr.y (mgr.24991) 3 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Bus STARTING 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:44 vm00 bash[17468]: cephadm 2026-03-09T18:41:44.556628+0000 mgr.y (mgr.24991) 4 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:41:45.379 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:44 vm00 bash[17468]: cephadm 2026-03-09T18:41:44.558022+0000 mgr.y (mgr.24991) 5 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Client ('192.168.123.100', 55672) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:44 vm00 bash[17468]: cephadm 2026-03-09T18:41:44.658153+0000 mgr.y (mgr.24991) 6 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:41:45.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:44 vm00 bash[17468]: cephadm 2026-03-09T18:41:44.658417+0000 mgr.y (mgr.24991) 7 : cephadm [INF] [09/Mar/2026:18:41:44] ENGINE Bus STARTED 2026-03-09T18:41:45.490 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.227+0000 7fe47b921140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:41:45.490 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.383+0000 7fe47b921140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:41:45.490 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.435+0000 7fe47b921140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:41:45.490 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.487+0000 7fe47b921140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:41:45.797 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.615+0000 7fe47b921140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:41:45.797 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:45 vm08 bash[36576]: debug 2026-03-09T18:41:45.795+0000 7fe47b921140 -1 mgr[py] Module nfs has 
missing NOTIFY_TYPES member 2026-03-09T18:41:46.099 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:45 vm08 bash[17774]: cluster 2026-03-09T18:41:44.938411+0000 mgr.y (mgr.24991) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:46.099 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: debug 2026-03-09T18:41:45.995+0000 7fe47b921140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:41:46.099 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: debug 2026-03-09T18:41:46.043+0000 7fe47b921140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:41:46.099 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: debug 2026-03-09T18:41:46.095+0000 7fe47b921140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:41:46.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:45 vm00 bash[22468]: cluster 2026-03-09T18:41:44.938411+0000 mgr.y (mgr.24991) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:45 vm00 bash[17468]: cluster 2026-03-09T18:41:44.938411+0000 mgr.y (mgr.24991) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:46.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: debug 2026-03-09T18:41:46.287+0000 7fe47b921140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:41:46.968 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:46 vm08 bash[17774]: cluster 2026-03-09T18:41:45.969246+0000 mon.a (mon.0) 1108 : cluster [DBG] mgrmap e39: y(active, since 4s) 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:46 vm08 bash[17774]: audit 2026-03-09T18:41:46.568375+0000 mon.b (mon.2) 135 : 
audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: debug 2026-03-09T18:41:46.559+0000 7fe47b921140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: [09/Mar/2026:18:41:46] ENGINE Bus STARTING 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: CherryPy Checker: 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: The Application mounted at '' has an empty config. 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: [09/Mar/2026:18:41:46] ENGINE Serving on http://:::9283 2026-03-09T18:41:46.969 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:46 vm08 bash[36576]: [09/Mar/2026:18:41:46] ENGINE Bus STARTED 2026-03-09T18:41:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:46 vm08 bash[17774]: audit 2026-03-09T18:41:46.568950+0000 mon.b (mon.2) 136 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:41:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:46 vm08 bash[17774]: cluster 2026-03-09T18:41:46.570447+0000 mon.a (mon.0) 1109 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:41:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:46 vm08 bash[17774]: audit 2026-03-09T18:41:46.570902+0000 mon.b (mon.2) 137 : audit [DBG] from='mgr.? 
192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:41:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:46 vm08 bash[17774]: audit 2026-03-09T18:41:46.571934+0000 mon.b (mon.2) 138 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:41:47.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:46 vm00 bash[22468]: cluster 2026-03-09T18:41:45.969246+0000 mon.a (mon.0) 1108 : cluster [DBG] mgrmap e39: y(active, since 4s) 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:46 vm00 bash[22468]: audit 2026-03-09T18:41:46.568375+0000 mon.b (mon.2) 135 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:46 vm00 bash[22468]: audit 2026-03-09T18:41:46.568950+0000 mon.b (mon.2) 136 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:46 vm00 bash[22468]: cluster 2026-03-09T18:41:46.570447+0000 mon.a (mon.0) 1109 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:46 vm00 bash[22468]: audit 2026-03-09T18:41:46.570902+0000 mon.b (mon.2) 137 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:46 vm00 bash[22468]: audit 2026-03-09T18:41:46.571934+0000 mon.b (mon.2) 138 : audit [DBG] from='mgr.? 
192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:46 vm00 bash[17468]: cluster 2026-03-09T18:41:45.969246+0000 mon.a (mon.0) 1108 : cluster [DBG] mgrmap e39: y(active, since 4s) 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:46 vm00 bash[17468]: audit 2026-03-09T18:41:46.568375+0000 mon.b (mon.2) 135 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:46 vm00 bash[17468]: audit 2026-03-09T18:41:46.568950+0000 mon.b (mon.2) 136 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:46 vm00 bash[17468]: cluster 2026-03-09T18:41:46.570447+0000 mon.a (mon.0) 1109 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:46 vm00 bash[17468]: audit 2026-03-09T18:41:46.570902+0000 mon.b (mon.2) 137 : audit [DBG] from='mgr.? 192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:41:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:46 vm00 bash[17468]: audit 2026-03-09T18:41:46.571934+0000 mon.b (mon.2) 138 : audit [DBG] from='mgr.? 
192.168.123.108:0/2437113709' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:41:48.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:47 vm00 bash[22468]: cluster 2026-03-09T18:41:46.938829+0000 mgr.y (mgr.24991) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:47 vm00 bash[22468]: cluster 2026-03-09T18:41:46.992431+0000 mon.a (mon.0) 1110 : cluster [DBG] mgrmap e40: y(active, since 5s), standbys: x 2026-03-09T18:41:48.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:47 vm00 bash[22468]: audit 2026-03-09T18:41:46.993915+0000 mon.a (mon.0) 1111 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:41:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:47 vm00 bash[17468]: cluster 2026-03-09T18:41:46.938829+0000 mgr.y (mgr.24991) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:47 vm00 bash[17468]: cluster 2026-03-09T18:41:46.992431+0000 mon.a (mon.0) 1110 : cluster [DBG] mgrmap e40: y(active, since 5s), standbys: x 2026-03-09T18:41:48.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:47 vm00 bash[17468]: audit 2026-03-09T18:41:46.993915+0000 mon.a (mon.0) 1111 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:41:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:47 vm08 bash[17774]: cluster 2026-03-09T18:41:46.938829+0000 mgr.y (mgr.24991) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:41:47 vm08 bash[17774]: cluster 2026-03-09T18:41:46.992431+0000 mon.a (mon.0) 1110 : cluster [DBG] mgrmap e40: y(active, since 5s), standbys: x 2026-03-09T18:41:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:47 vm08 bash[17774]: audit 2026-03-09T18:41:46.993915+0000 mon.a (mon.0) 1111 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T18:41:49.296 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:49 vm08 bash[17774]: cluster 2026-03-09T18:41:48.015692+0000 mon.a (mon.0) 1112 : cluster [DBG] mgrmap e41: y(active, since 6s), standbys: x 2026-03-09T18:41:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:49 vm00 bash[22468]: cluster 2026-03-09T18:41:48.015692+0000 mon.a (mon.0) 1112 : cluster [DBG] mgrmap e41: y(active, since 6s), standbys: x 2026-03-09T18:41:49.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:49 vm00 bash[17468]: cluster 2026-03-09T18:41:48.015692+0000 mon.a (mon.0) 1112 : cluster [DBG] mgrmap e41: y(active, since 6s), standbys: x 2026-03-09T18:41:50.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: cluster 2026-03-09T18:41:48.939177+0000 mgr.y (mgr.24991) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:50.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:49.439326+0000 mon.a (mon.0) 1113 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:49.450032+0000 mon.a (mon.0) 1114 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:49.510235+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:49.520597+0000 mon.a (mon.0) 1116 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.189904+0000 mon.a (mon.0) 1117 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.198570+0000 mon.a (mon.0) 1118 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.202024+0000 mon.a (mon.0) 1119 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.204228+0000 mon.a (mon.0) 1120 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.212398+0000 mon.a (mon.0) 1121 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.213306+0000 mon.a (mon.0) 1122 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.214219+0000 mon.a (mon.0) 1123 : audit 
[DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.214745+0000 mon.a (mon.0) 1124 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.374774+0000 mon.a (mon.0) 1125 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.381571+0000 mon.a (mon.0) 1126 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.388671+0000 mon.a (mon.0) 1127 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.396862+0000 mon.a (mon.0) 1128 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.403422+0000 mon.a (mon.0) 1129 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.416310+0000 mon.a (mon.0) 1130 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix 
\"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:50 vm00 bash[22468]: audit 2026-03-09T18:41:50.419694+0000 mon.a (mon.0) 1131 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: cluster 2026-03-09T18:41:48.939177+0000 mgr.y (mgr.24991) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:49.439326+0000 mon.a (mon.0) 1113 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:49.450032+0000 mon.a (mon.0) 1114 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:49.510235+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:49.520597+0000 mon.a (mon.0) 1116 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.189904+0000 mon.a (mon.0) 1117 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.198570+0000 mon.a (mon.0) 1118 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.202024+0000 mon.a (mon.0) 1119 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.204228+0000 mon.a (mon.0) 1120 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.212398+0000 mon.a (mon.0) 1121 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.213306+0000 mon.a (mon.0) 1122 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.214219+0000 mon.a (mon.0) 1123 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.214745+0000 mon.a (mon.0) 1124 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.374774+0000 mon.a (mon.0) 1125 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.381571+0000 mon.a (mon.0) 1126 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.388671+0000 mon.a (mon.0) 1127 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.396862+0000 mon.a (mon.0) 1128 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.403422+0000 mon.a (mon.0) 1129 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.416310+0000 mon.a (mon.0) 1130 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:41:50.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:50 vm00 bash[17468]: audit 2026-03-09T18:41:50.419694+0000 mon.a (mon.0) 1131 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: cluster 2026-03-09T18:41:48.939177+0000 mgr.y (mgr.24991) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 
bash[17774]: audit 2026-03-09T18:41:49.439326+0000 mon.a (mon.0) 1113 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:49.450032+0000 mon.a (mon.0) 1114 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:49.510235+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:49.520597+0000 mon.a (mon.0) 1116 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.189904+0000 mon.a (mon.0) 1117 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.198570+0000 mon.a (mon.0) 1118 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.202024+0000 mon.a (mon.0) 1119 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.204228+0000 mon.a (mon.0) 1120 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.212398+0000 mon.a (mon.0) 1121 : 
audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.213306+0000 mon.a (mon.0) 1122 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.214219+0000 mon.a (mon.0) 1123 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.214745+0000 mon.a (mon.0) 1124 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.374774+0000 mon.a (mon.0) 1125 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.381571+0000 mon.a (mon.0) 1126 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.388671+0000 mon.a (mon.0) 1127 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.396862+0000 mon.a (mon.0) 1128 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 
2026-03-09T18:41:50.403422+0000 mon.a (mon.0) 1129 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.416310+0000 mon.a (mon.0) 1130 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T18:41:50.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:50 vm08 bash[17774]: audit 2026-03-09T18:41:50.419694+0000 mon.a (mon.0) 1131 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.215507+0000 mgr.y (mgr.24991) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.215652+0000 mgr.y (mgr.24991) 12 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.250094+0000 mgr.y (mgr.24991) 13 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.251517+0000 mgr.y (mgr.24991) 14 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.286191+0000 mgr.y 
(mgr.24991) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.286396+0000 mgr.y (mgr.24991) 16 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.330248+0000 mgr.y (mgr.24991) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.332082+0000 mgr.y (mgr.24991) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.415937+0000 mgr.y (mgr.24991) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 
2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: cephadm 2026-03-09T18:41:50.420428+0000 mgr.y (mgr.24991) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: audit 2026-03-09T18:41:50.996410+0000 mon.a (mon.0) 1132 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:51 vm00 bash[17468]: audit 2026-03-09T18:41:51.005398+0000 mon.a (mon.0) 1133 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.215507+0000 mgr.y (mgr.24991) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.215652+0000 mgr.y (mgr.24991) 12 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:41:51.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.250094+0000 mgr.y (mgr.24991) 13 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.251517+0000 mgr.y (mgr.24991) 14 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.286191+0000 mgr.y (mgr.24991) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.286396+0000 mgr.y (mgr.24991) 16 : 
cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.330248+0000 mgr.y (mgr.24991) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.332082+0000 mgr.y (mgr.24991) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.415937+0000 mgr.y (mgr.24991) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: cephadm 2026-03-09T18:41:50.420428+0000 mgr.y (mgr.24991) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: audit 2026-03-09T18:41:50.996410+0000 mon.a (mon.0) 1132 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:51.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:51 vm00 bash[22468]: audit 2026-03-09T18:41:51.005398+0000 mon.a (mon.0) 1133 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:51.709 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.215507+0000 mgr.y (mgr.24991) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:41:51.709 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.215652+0000 mgr.y (mgr.24991) 12 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:41:51.709 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.250094+0000 mgr.y (mgr.24991) 13 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:41:51.709 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.251517+0000 mgr.y (mgr.24991) 14 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:41:51.709 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.286191+0000 mgr.y (mgr.24991) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:41:51.709 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.286396+0000 mgr.y (mgr.24991) 16 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:41:51.709 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.330248+0000 mgr.y (mgr.24991) 17 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:41:51.710 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.332082+0000 mgr.y (mgr.24991) 18 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:41:51.710 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.415937+0000 mgr.y (mgr.24991) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm00.ywhulq (dependencies changed)... 
2026-03-09T18:41:51.710 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: cephadm 2026-03-09T18:41:50.420428+0000 mgr.y (mgr.24991) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm00.ywhulq on vm00 2026-03-09T18:41:51.710 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: audit 2026-03-09T18:41:50.996410+0000 mon.a (mon.0) 1132 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:51.710 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:51 vm08 bash[17774]: audit 2026-03-09T18:41:51.005398+0000 mon.a (mon.0) 1133 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 systemd[1]: Stopping Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.808Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.809Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.812Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.812Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[42014]: ts=2026-03-09T18:41:51.812Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 bash[43116]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-prometheus-a 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service: Deactivated successfully. 
2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 systemd[1]: Stopped Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:41:51.965 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:51 vm08 systemd[1]: Started Ceph prometheus.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.020Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.020Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.020Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm08 (none))" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.020Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.020Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.027Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.034Z caller=main.go:1129 
level=info msg="Starting TSDB ..." 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.036Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.036Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.037Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.037Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.502µs 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.037Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.047Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=5 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.068Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=5 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.076Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=5 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.079Z 
caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=5 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.082Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=4 maxSegment=5 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.082Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=5 maxSegment=5 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.082Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=82.855µs wal_replay_duration=45.225345ms wbl_replay_duration=140ns total_replay_duration=45.51144ms 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.087Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.088Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.089Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.102Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=12.508396ms db_storage=741ns remote_storage=1.243µs web_handler=512ns query_engine=671ns scrape=1.16499ms scrape_sd=108.584µs notify=9.318µs notify_sd=6.022µs rules=9.386211ms tracing=5.941µs 2026-03-09T18:41:52.321 
INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.102Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T18:41:52.321 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:52 vm08 bash[43197]: ts=2026-03-09T18:41:52.103Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: cluster 2026-03-09T18:41:50.939722+0000 mgr.y (mgr.24991) 21 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: cephadm 2026-03-09T18:41:51.008311+0000 mgr.y (mgr.24991) 22 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: cephadm 2026-03-09T18:41:51.238199+0000 mgr.y (mgr.24991) 23 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.563641+0000 mon.c (mon.1) 153 : audit [DBG] from='client.? 
192.168.123.100:0/3445958367' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.921697+0000 mon.a (mon.0) 1134 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.928091+0000 mon.a (mon.0) 1135 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.931344+0000 mon.a (mon.0) 1136 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.943208+0000 mon.a (mon.0) 1137 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.944702+0000 mon.a (mon.0) 1138 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.945996+0000 mon.a (mon.0) 1139 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:41:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.953354+0000 mon.a (mon.0) 1140 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.958064+0000 mon.a (mon.0) 1141 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:41:52.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:52 vm08 bash[17774]: audit 2026-03-09T18:41:51.991979+0000 mon.a (mon.0) 1142 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: cluster 2026-03-09T18:41:50.939722+0000 mgr.y (mgr.24991) 21 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: cephadm 2026-03-09T18:41:51.008311+0000 mgr.y (mgr.24991) 22 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: cephadm 2026-03-09T18:41:51.238199+0000 mgr.y (mgr.24991) 23 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.563641+0000 mon.c (mon.1) 153 : audit [DBG] from='client.? 
192.168.123.100:0/3445958367' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.921697+0000 mon.a (mon.0) 1134 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.928091+0000 mon.a (mon.0) 1135 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.931344+0000 mon.a (mon.0) 1136 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.943208+0000 mon.a (mon.0) 1137 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.944702+0000 mon.a (mon.0) 1138 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.945996+0000 mon.a (mon.0) 1139 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.953354+0000 mon.a (mon.0) 1140 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.958064+0000 mon.a (mon.0) 1141 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:52 vm00 bash[22468]: audit 2026-03-09T18:41:51.991979+0000 mon.a (mon.0) 1142 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: cluster 2026-03-09T18:41:50.939722+0000 mgr.y (mgr.24991) 21 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: cephadm 2026-03-09T18:41:51.008311+0000 mgr.y (mgr.24991) 22 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: cephadm 2026-03-09T18:41:51.238199+0000 mgr.y (mgr.24991) 23 : cephadm [INF] Reconfiguring daemon prometheus.a on vm08 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.563641+0000 mon.c (mon.1) 153 : audit [DBG] from='client.? 
192.168.123.100:0/3445958367' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.921697+0000 mon.a (mon.0) 1134 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.928091+0000 mon.a (mon.0) 1135 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.931344+0000 mon.a (mon.0) 1136 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.943208+0000 mon.a (mon.0) 1137 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.944702+0000 mon.a (mon.0) 1138 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.945996+0000 mon.a (mon.0) 1139 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.953354+0000 mon.a (mon.0) 1140 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:52.879 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.958064+0000 mon.a (mon.0) 1141 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:41:52.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:52 vm00 bash[17468]: audit 2026-03-09T18:41:51.991979+0000 mon.a (mon.0) 1142 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:53.034 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.034 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.034 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:41:53.396 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.396 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.396 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: Stopping Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:41:53.396 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 bash[43469]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mgr-x 2026-03-09T18:41:53.396 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:41:53.396 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x.service: Failed with result 'exit-code'. 2026-03-09T18:41:53.396 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: Stopped Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:41:53.396 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.397 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.397 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.397 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:41:53.473 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:51.931846+0000 mgr.y (mgr.24991) 24 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:41:53.473 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: cephadm 2026-03-09T18:41:51.944496+0000 mgr.y (mgr.24991) 25 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:41:53.473 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:51.944994+0000 mgr.y (mgr.24991) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:41:53.473 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.473 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.474 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:41:53.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 systemd[1]: Started Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:41:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:51.946284+0000 mgr.y (mgr.24991) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:41:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:51.958553+0000 mgr.y (mgr.24991) 28 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:41:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: cephadm 2026-03-09T18:41:52.477881+0000 mgr.y (mgr.24991) 29 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T18:41:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:52.478365+0000 mon.a (mon.0) 1143 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:41:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:52.479464+0000 mon.a (mon.0) 1144 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:41:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: audit 2026-03-09T18:41:52.480241+0000 mon.a (mon.0) 1145 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:53 vm08 bash[17774]: cephadm 2026-03-09T18:41:52.480915+0000 mgr.y (mgr.24991) 30 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:51.931846+0000 mgr.y (mgr.24991) 24 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: cephadm 2026-03-09T18:41:51.944496+0000 mgr.y (mgr.24991) 25 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:51.944994+0000 mgr.y (mgr.24991) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:51.946284+0000 mgr.y (mgr.24991) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:51.958553+0000 mgr.y (mgr.24991) 28 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: cephadm 2026-03-09T18:41:52.477881+0000 mgr.y (mgr.24991) 29 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:52.478365+0000 mon.a (mon.0) 1143 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:52.479464+0000 mon.a (mon.0) 1144 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: audit 2026-03-09T18:41:52.480241+0000 mon.a (mon.0) 1145 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:53 vm00 bash[17468]: cephadm 2026-03-09T18:41:52.480915+0000 mgr.y (mgr.24991) 30 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:51.931846+0000 mgr.y (mgr.24991) 24 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: cephadm 2026-03-09T18:41:51.944496+0000 mgr.y (mgr.24991) 25 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:51.944994+0000 mgr.y (mgr.24991) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:51.946284+0000 mgr.y (mgr.24991) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:51.958553+0000 mgr.y (mgr.24991) 28 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: cephadm 2026-03-09T18:41:52.477881+0000 mgr.y (mgr.24991) 29 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:52.478365+0000 mon.a (mon.0) 1143 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:52.479464+0000 mon.a (mon.0) 1144 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: audit 2026-03-09T18:41:52.480241+0000 mon.a (mon.0) 1145 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:41:53.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:53 vm00 bash[22468]: cephadm 2026-03-09T18:41:52.480915+0000 mgr.y (mgr.24991) 30 : cephadm [INF] Deploying daemon mgr.x on vm08 2026-03-09T18:41:54.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 bash[43582]: debug 2026-03-09T18:41:53.751+0000 7fe23f565140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:41:54.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 bash[43582]: debug 2026-03-09T18:41:53.795+0000 7fe23f565140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:41:54.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:53 vm08 bash[43582]: debug 2026-03-09T18:41:53.943+0000 7fe23f565140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 
2026-03-09T18:41:54.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:54 vm08 bash[43582]: debug 2026-03-09T18:41:54.275+0000 7fe23f565140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:41:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:54 vm08 bash[17774]: cluster 2026-03-09T18:41:52.939989+0000 mgr.y (mgr.24991) 31 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:41:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:54 vm08 bash[17774]: audit 2026-03-09T18:41:53.529086+0000 mon.a (mon.0) 1146 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:54 vm08 bash[17774]: audit 2026-03-09T18:41:53.536585+0000 mon.a (mon.0) 1147 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:54 vm08 bash[17774]: audit 2026-03-09T18:41:53.537313+0000 mon.a (mon.0) 1148 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:54 vm00 bash[22468]: cluster 2026-03-09T18:41:52.939989+0000 mgr.y (mgr.24991) 31 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:54 vm00 bash[22468]: audit 2026-03-09T18:41:53.529086+0000 mon.a (mon.0) 1146 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:54 vm00 bash[22468]: audit 2026-03-09T18:41:53.536585+0000 mon.a (mon.0) 1147 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 
2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:54 vm00 bash[22468]: audit 2026-03-09T18:41:53.537313+0000 mon.a (mon.0) 1148 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:54 vm00 bash[17468]: cluster 2026-03-09T18:41:52.939989+0000 mgr.y (mgr.24991) 31 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:54 vm00 bash[17468]: audit 2026-03-09T18:41:53.529086+0000 mon.a (mon.0) 1146 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:54 vm00 bash[17468]: audit 2026-03-09T18:41:53.536585+0000 mon.a (mon.0) 1147 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:41:54.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:54 vm00 bash[17468]: audit 2026-03-09T18:41:53.537313+0000 mon.a (mon.0) 1148 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:41:55.153 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:54 vm08 bash[43582]: debug 2026-03-09T18:41:54.767+0000 7fe23f565140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:41:55.153 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:54 vm08 bash[43582]: debug 2026-03-09T18:41:54.863+0000 7fe23f565140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:41:55.153 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:54 vm08 bash[43582]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support 
sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:41:55.153 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:54 vm08 bash[43582]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T18:41:55.153 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:54 vm08 bash[43582]: from numpy import show_config as show_numpy_config 2026-03-09T18:41:55.153 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:54.999+0000 7fe23f565140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:41:55.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.151+0000 7fe23f565140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:41:55.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.195+0000 7fe23f565140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:41:55.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.239+0000 7fe23f565140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:41:55.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.283+0000 7fe23f565140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:41:55.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.343+0000 7fe23f565140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:41:56.115 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.827+0000 7fe23f565140 -1 mgr[py] Module 
selftest has missing NOTIFY_TYPES member 2026-03-09T18:41:56.115 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.871+0000 7fe23f565140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:41:56.115 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:55 vm08 bash[43582]: debug 2026-03-09T18:41:55.911+0000 7fe23f565140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:41:56.115 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.067+0000 7fe23f565140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:41:56.457 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.111+0000 7fe23f565140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:41:56.457 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.155+0000 7fe23f565140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:41:56.457 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.275+0000 7fe23f565140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:41:56.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.455+0000 7fe23f565140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:41:56.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.643+0000 7fe23f565140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:41:56.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.683+0000 7fe23f565140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:41:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:56 vm08 bash[17774]: cluster 
2026-03-09T18:41:54.940493+0000 mgr.y (mgr.24991) 32 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:41:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:56 vm00 bash[22468]: cluster 2026-03-09T18:41:54.940493+0000 mgr.y (mgr.24991) 32 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:41:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:56 vm00 bash[17468]: cluster 2026-03-09T18:41:54.940493+0000 mgr.y (mgr.24991) 32 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:41:57.172 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.731+0000 7fe23f565140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:41:57.172 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:56 vm08 bash[43582]: debug 2026-03-09T18:41:56.903+0000 7fe23f565140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:41:57.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:57 vm08 bash[43582]: debug 2026-03-09T18:41:57.171+0000 7fe23f565140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:41:57.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:57 vm08 bash[43582]: [09/Mar/2026:18:41:57] ENGINE Bus STARTING 2026-03-09T18:41:57.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:57 vm08 bash[43582]: CherryPy Checker: 2026-03-09T18:41:57.474 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:57 vm08 bash[43582]: The Application mounted at '' has an empty config. 
2026-03-09T18:41:57.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:57 vm08 bash[43582]: [09/Mar/2026:18:41:57] ENGINE Serving on http://:::9283 2026-03-09T18:41:57.475 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:41:57 vm08 bash[43582]: [09/Mar/2026:18:41:57] ENGINE Bus STARTED 2026-03-09T18:41:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:57 vm08 bash[17774]: audit 2026-03-09T18:41:57.179799+0000 mon.b (mon.2) 139 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:41:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:57 vm08 bash[17774]: cluster 2026-03-09T18:41:57.179839+0000 mon.a (mon.0) 1149 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:41:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:57 vm08 bash[17774]: cluster 2026-03-09T18:41:57.179996+0000 mon.a (mon.0) 1150 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:41:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:57 vm08 bash[17774]: audit 2026-03-09T18:41:57.180773+0000 mon.b (mon.2) 140 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:41:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:57 vm08 bash[17774]: audit 2026-03-09T18:41:57.181682+0000 mon.b (mon.2) 141 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:41:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:57 vm08 bash[17774]: audit 2026-03-09T18:41:57.182204+0000 mon.b (mon.2) 142 : audit [DBG] from='mgr.? 
192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:41:58.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:57 vm00 bash[22468]: audit 2026-03-09T18:41:57.179799+0000 mon.b (mon.2) 139 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:57 vm00 bash[22468]: cluster 2026-03-09T18:41:57.179839+0000 mon.a (mon.0) 1149 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:57 vm00 bash[22468]: cluster 2026-03-09T18:41:57.179996+0000 mon.a (mon.0) 1150 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:57 vm00 bash[22468]: audit 2026-03-09T18:41:57.180773+0000 mon.b (mon.2) 140 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:57 vm00 bash[22468]: audit 2026-03-09T18:41:57.181682+0000 mon.b (mon.2) 141 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:57 vm00 bash[22468]: audit 2026-03-09T18:41:57.182204+0000 mon.b (mon.2) 142 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:57 vm00 bash[17468]: audit 2026-03-09T18:41:57.179799+0000 mon.b (mon.2) 139 : audit [DBG] from='mgr.? 
192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:57 vm00 bash[17468]: cluster 2026-03-09T18:41:57.179839+0000 mon.a (mon.0) 1149 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:57 vm00 bash[17468]: cluster 2026-03-09T18:41:57.179996+0000 mon.a (mon.0) 1150 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:57 vm00 bash[17468]: audit 2026-03-09T18:41:57.180773+0000 mon.b (mon.2) 140 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:57 vm00 bash[17468]: audit 2026-03-09T18:41:57.181682+0000 mon.b (mon.2) 141 : audit [DBG] from='mgr.? 192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:41:58.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:57 vm00 bash[17468]: audit 2026-03-09T18:41:57.182204+0000 mon.b (mon.2) 142 : audit [DBG] from='mgr.? 
192.168.123.108:0/3330635409' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:41:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:58 vm00 bash[22468]: cluster 2026-03-09T18:41:56.940839+0000 mgr.y (mgr.24991) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:41:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:58 vm00 bash[22468]: cluster 2026-03-09T18:41:57.698227+0000 mon.a (mon.0) 1151 : cluster [DBG] mgrmap e42: y(active, since 15s), standbys: x 2026-03-09T18:41:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:58 vm00 bash[22468]: audit 2026-03-09T18:41:58.329844+0000 mon.a (mon.0) 1152 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:41:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:58 vm00 bash[17468]: cluster 2026-03-09T18:41:56.940839+0000 mgr.y (mgr.24991) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:41:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:58 vm00 bash[17468]: cluster 2026-03-09T18:41:57.698227+0000 mon.a (mon.0) 1151 : cluster [DBG] mgrmap e42: y(active, since 15s), standbys: x 2026-03-09T18:41:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:58 vm00 bash[17468]: audit 2026-03-09T18:41:58.329844+0000 mon.a (mon.0) 1152 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:41:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:58 vm08 bash[17774]: cluster 2026-03-09T18:41:56.940839+0000 mgr.y (mgr.24991) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 
KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:41:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:58 vm08 bash[17774]: cluster 2026-03-09T18:41:57.698227+0000 mon.a (mon.0) 1151 : cluster [DBG] mgrmap e42: y(active, since 15s), standbys: x 2026-03-09T18:41:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:58 vm08 bash[17774]: audit 2026-03-09T18:41:58.329844+0000 mon.a (mon.0) 1152 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:41:59.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:41:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:41:59] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-09T18:42:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: cluster 2026-03-09T18:41:58.941214+0000 mgr.y (mgr.24991) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:42:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: audit 2026-03-09T18:41:58.950282+0000 mon.a (mon.0) 1153 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: audit 2026-03-09T18:41:58.958241+0000 mon.a (mon.0) 1154 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: audit 2026-03-09T18:41:59.084192+0000 mon.a (mon.0) 1155 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: audit 2026-03-09T18:41:59.092812+0000 mon.a (mon.0) 1156 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: audit 2026-03-09T18:41:59.703615+0000 mon.a (mon.0) 1157 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:41:59 vm08 bash[17774]: audit 2026-03-09T18:41:59.715932+0000 mon.a (mon.0) 1158 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: cluster 2026-03-09T18:41:58.941214+0000 mgr.y (mgr.24991) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:42:00.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: audit 2026-03-09T18:41:58.950282+0000 mon.a (mon.0) 1153 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: audit 2026-03-09T18:41:58.958241+0000 mon.a (mon.0) 1154 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: audit 2026-03-09T18:41:59.084192+0000 mon.a (mon.0) 1155 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: audit 2026-03-09T18:41:59.092812+0000 mon.a (mon.0) 1156 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: audit 2026-03-09T18:41:59.703615+0000 mon.a (mon.0) 1157 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:41:59 vm00 bash[22468]: audit 
2026-03-09T18:41:59.715932+0000 mon.a (mon.0) 1158 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: cluster 2026-03-09T18:41:58.941214+0000 mgr.y (mgr.24991) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: audit 2026-03-09T18:41:58.950282+0000 mon.a (mon.0) 1153 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: audit 2026-03-09T18:41:58.958241+0000 mon.a (mon.0) 1154 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: audit 2026-03-09T18:41:59.084192+0000 mon.a (mon.0) 1155 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: audit 2026-03-09T18:41:59.092812+0000 mon.a (mon.0) 1156 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: audit 2026-03-09T18:41:59.703615+0000 mon.a (mon.0) 1157 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:41:59 vm00 bash[17468]: audit 2026-03-09T18:41:59.715932+0000 mon.a (mon.0) 1158 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:02 vm08 bash[17774]: cluster 2026-03-09T18:42:00.941762+0000 mgr.y (mgr.24991) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 
active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:42:02.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:02 vm08 bash[17774]: audit 2026-03-09T18:42:01.367112+0000 mgr.y (mgr.24991) 36 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:03.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:02 vm00 bash[22468]: cluster 2026-03-09T18:42:00.941762+0000 mgr.y (mgr.24991) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:42:03.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:02 vm00 bash[22468]: audit 2026-03-09T18:42:01.367112+0000 mgr.y (mgr.24991) 36 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:03.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:02 vm00 bash[17468]: cluster 2026-03-09T18:42:00.941762+0000 mgr.y (mgr.24991) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:42:03.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:02 vm00 bash[17468]: audit 2026-03-09T18:42:01.367112+0000 mgr.y (mgr.24991) 36 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:04 vm08 bash[17774]: cluster 2026-03-09T18:42:02.942060+0000 mgr.y (mgr.24991) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T18:42:05.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:04 vm00 bash[22468]: cluster 2026-03-09T18:42:02.942060+0000 mgr.y 
(mgr.24991) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T18:42:05.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:04 vm00 bash[17468]: cluster 2026-03-09T18:42:02.942060+0000 mgr.y (mgr.24991) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T18:42:06.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: cluster 2026-03-09T18:42:04.942723+0000 mgr.y (mgr.24991) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:42:06.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.512040+0000 mon.a (mon.0) 1159 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.517013+0000 mon.a (mon.0) 1160 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.517783+0000 mon.a (mon.0) 1161 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.518326+0000 mon.a (mon.0) 1162 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.522627+0000 mon.a (mon.0) 1163 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 
2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.568217+0000 mon.a (mon.0) 1164 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.569565+0000 mon.a (mon.0) 1165 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.575987+0000 mon.a (mon.0) 1166 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.577547+0000 mon.a (mon.0) 1167 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.583999+0000 mon.a (mon.0) 1168 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.584920+0000 mon.a (mon.0) 1169 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.590185+0000 mon.a (mon.0) 1170 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "mgr.y"}]': finished 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.591356+0000 mon.a (mon.0) 1171 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.596569+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.597773+0000 mon.a (mon.0) 1173 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.603209+0000 mon.a (mon.0) 1174 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.604276+0000 mon.a (mon.0) 1175 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.609581+0000 mon.a (mon.0) 1176 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.610505+0000 mon.a (mon.0) 1177 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.615638+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.616585+0000 mon.a (mon.0) 1179 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.620948+0000 mon.a (mon.0) 1180 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.621896+0000 mon.a (mon.0) 1181 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.628383+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.629593+0000 mon.a (mon.0) 1183 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.635231+0000 mon.a (mon.0) 1184 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.636094+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.637373+0000 mon.a (mon.0) 1186 : audit [DBG] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:06.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:06 vm08 bash[17774]: audit 2026-03-09T18:42:06.638612+0000 mon.a (mon.0) 1187 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: cluster 2026-03-09T18:42:04.942723+0000 mgr.y (mgr.24991) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.512040+0000 mon.a (mon.0) 1159 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.517013+0000 mon.a (mon.0) 1160 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.517783+0000 mon.a (mon.0) 1161 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.518326+0000 mon.a (mon.0) 1162 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.522627+0000 mon.a (mon.0) 1163 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 
bash[17468]: audit 2026-03-09T18:42:06.568217+0000 mon.a (mon.0) 1164 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.569565+0000 mon.a (mon.0) 1165 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.575987+0000 mon.a (mon.0) 1166 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.577547+0000 mon.a (mon.0) 1167 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.583999+0000 mon.a (mon.0) 1168 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.584920+0000 mon.a (mon.0) 1169 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T18:42:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.590185+0000 mon.a (mon.0) 1170 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-09T18:42:07.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.591356+0000 mon.a (mon.0) 1171 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.596569+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.597773+0000 mon.a (mon.0) 1173 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.603209+0000 mon.a (mon.0) 1174 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.604276+0000 mon.a (mon.0) 1175 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.609581+0000 mon.a (mon.0) 1176 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.610505+0000 mon.a (mon.0) 1177 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: cluster 2026-03-09T18:42:04.942723+0000 mgr.y (mgr.24991) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 
1.3 KiB/s rd, 1 op/s 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.512040+0000 mon.a (mon.0) 1159 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.517013+0000 mon.a (mon.0) 1160 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.517783+0000 mon.a (mon.0) 1161 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.518326+0000 mon.a (mon.0) 1162 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.522627+0000 mon.a (mon.0) 1163 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.568217+0000 mon.a (mon.0) 1164 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.569565+0000 mon.a (mon.0) 1165 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.575987+0000 mon.a (mon.0) 
1166 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.577547+0000 mon.a (mon.0) 1167 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.583999+0000 mon.a (mon.0) 1168 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.584920+0000 mon.a (mon.0) 1169 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.590185+0000 mon.a (mon.0) 1170 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.591356+0000 mon.a (mon.0) 1171 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.596569+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.597773+0000 mon.a (mon.0) 1173 : audit [DBG] 
from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.603209+0000 mon.a (mon.0) 1174 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.604276+0000 mon.a (mon.0) 1175 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.609581+0000 mon.a (mon.0) 1176 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.610505+0000 mon.a (mon.0) 1177 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.615638+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.616585+0000 mon.a (mon.0) 1179 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.620948+0000 mon.a (mon.0) 1180 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.621896+0000 mon.a (mon.0) 1181 : audit [DBG] 
from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.628383+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.629593+0000 mon.a (mon.0) 1183 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.635231+0000 mon.a (mon.0) 1184 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.636094+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.637373+0000 mon.a (mon.0) 1186 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:06 vm00 bash[22468]: audit 2026-03-09T18:42:06.638612+0000 mon.a (mon.0) 1187 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.615638+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.616585+0000 mon.a 
(mon.0) 1179 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.620948+0000 mon.a (mon.0) 1180 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.621896+0000 mon.a (mon.0) 1181 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.628383+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.629593+0000 mon.a (mon.0) 1183 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.635231+0000 mon.a (mon.0) 1184 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.636094+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 2026-03-09T18:42:06.637373+0000 mon.a (mon.0) 1186 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:06 vm00 bash[17468]: audit 
2026-03-09T18:42:06.638612+0000 mon.a (mon.0) 1187 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.570135+0000 mgr.y (mgr.24991) 39 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.591800+0000 mgr.y (mgr.24991) 40 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.598176+0000 mgr.y (mgr.24991) 41 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.604700+0000 mgr.y (mgr.24991) 42 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.610925+0000 mgr.y (mgr.24991) 43 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.616995+0000 mgr.y (mgr.24991) 44 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.622377+0000 mgr.y (mgr.24991) 45 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.630078+0000 mgr.y (mgr.24991) 46 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:42:07.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.636530+0000 mgr.y (mgr.24991) 47 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.637800+0000 mgr.y (mgr.24991) 48 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:42:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:07 vm08 bash[17774]: cephadm 2026-03-09T18:42:06.639044+0000 mgr.y (mgr.24991) 49 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.570135+0000 mgr.y (mgr.24991) 39 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.591800+0000 mgr.y (mgr.24991) 40 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.598176+0000 mgr.y (mgr.24991) 41 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.604700+0000 mgr.y (mgr.24991) 42 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.610925+0000 mgr.y (mgr.24991) 43 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.616995+0000 mgr.y (mgr.24991) 44 : cephadm [INF] Upgrade: Setting container_image for all 
ceph-exporter 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.622377+0000 mgr.y (mgr.24991) 45 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.630078+0000 mgr.y (mgr.24991) 46 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.636530+0000 mgr.y (mgr.24991) 47 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.637800+0000 mgr.y (mgr.24991) 48 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:07 vm00 bash[22468]: cephadm 2026-03-09T18:42:06.639044+0000 mgr.y (mgr.24991) 49 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.570135+0000 mgr.y (mgr.24991) 39 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.591800+0000 mgr.y (mgr.24991) 40 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.598176+0000 mgr.y (mgr.24991) 41 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.604700+0000 mgr.y (mgr.24991) 42 : cephadm [INF] Upgrade: Setting 
container_image for all rbd-mirror 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.610925+0000 mgr.y (mgr.24991) 43 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.616995+0000 mgr.y (mgr.24991) 44 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.622377+0000 mgr.y (mgr.24991) 45 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.630078+0000 mgr.y (mgr.24991) 46 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.636530+0000 mgr.y (mgr.24991) 47 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.637800+0000 mgr.y (mgr.24991) 48 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:42:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:07 vm00 bash[17468]: cephadm 2026-03-09T18:42:06.639044+0000 mgr.y (mgr.24991) 49 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:42:08.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:08 vm08 bash[17774]: cluster 2026-03-09T18:42:06.943110+0000 mgr.y (mgr.24991) 50 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:08.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:08 vm08 bash[17774]: 
cephadm 2026-03-09T18:42:07.156963+0000 mgr.y (mgr.24991) 51 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T18:42:08.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:08 vm08 bash[17774]: cephadm 2026-03-09T18:42:07.192086+0000 mgr.y (mgr.24991) 52 : cephadm [INF] Deploying daemon grafana.a on vm08 2026-03-09T18:42:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:08 vm00 bash[22468]: cluster 2026-03-09T18:42:06.943110+0000 mgr.y (mgr.24991) 50 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:08 vm00 bash[22468]: cephadm 2026-03-09T18:42:07.156963+0000 mgr.y (mgr.24991) 51 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T18:42:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:08 vm00 bash[22468]: cephadm 2026-03-09T18:42:07.192086+0000 mgr.y (mgr.24991) 52 : cephadm [INF] Deploying daemon grafana.a on vm08 2026-03-09T18:42:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:08 vm00 bash[17468]: cluster 2026-03-09T18:42:06.943110+0000 mgr.y (mgr.24991) 50 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:08 vm00 bash[17468]: cephadm 2026-03-09T18:42:07.156963+0000 mgr.y (mgr.24991) 51 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T18:42:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:08 vm00 bash[17468]: cephadm 2026-03-09T18:42:07.192086+0000 mgr.y (mgr.24991) 52 : cephadm [INF] Deploying daemon grafana.a on vm08 2026-03-09T18:42:09.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:42:09] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-09T18:42:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:10 vm00 
bash[22468]: cluster 2026-03-09T18:42:08.943453+0000 mgr.y (mgr.24991) 53 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:10 vm00 bash[17468]: cluster 2026-03-09T18:42:08.943453+0000 mgr.y (mgr.24991) 53 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:10 vm08 bash[17774]: cluster 2026-03-09T18:42:08.943453+0000 mgr.y (mgr.24991) 53 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:12.165 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:42:12.595 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (12m) 13s ago 19m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (12m) 13s ago 19m 39.8M - dad864ee21e9 b6a0baf6efb9 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (21s) 13s ago 18m 41.4M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (19s) 13s ago 22m 287M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (9m) 13s ago 22m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (22m) 13s ago 22m 71.7M 2048M 17.2.0 e1d6a67b021e 819e8890799a 2026-03-09T18:42:12.596 
INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (22m) 13s ago 22m 55.5M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (22m) 13s ago 22m 57.2M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (12m) 13s ago 19m 7879k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (12m) 13s ago 19m 7824k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (21m) 13s ago 21m 52.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (21m) 13s ago 21m 53.6M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (21m) 13s ago 21m 48.8M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (21m) 13s ago 21m 54.7M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (20m) 13s ago 20m 53.6M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (20m) 13s ago 20m 52.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (20m) 13s ago 20m 51.5M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (20m) 13s ago 20m 52.3M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (20s) 13s ago 19m 40.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:42:12.596 
INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (18m) 13s ago 18m 87.8M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:42:12.596 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (18m) 13s ago 18m 88.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {}, 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13, 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb 
(e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:42:12.852 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [ 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "mgr" 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "2/2 daemons upgraded", 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading grafana daemons", 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:42:13.098 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:42:13.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:12 vm00 bash[22468]: cluster 2026-03-09T18:42:10.943982+0000 mgr.y (mgr.24991) 54 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:12 vm00 bash[22468]: audit 2026-03-09T18:42:11.371637+0000 mgr.y (mgr.24991) 55 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:12 vm00 bash[17468]: cluster 2026-03-09T18:42:10.943982+0000 mgr.y (mgr.24991) 54 : 
cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:12 vm00 bash[17468]: audit 2026-03-09T18:42:11.371637+0000 mgr.y (mgr.24991) 55 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:13.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:12 vm08 bash[17774]: cluster 2026-03-09T18:42:10.943982+0000 mgr.y (mgr.24991) 54 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:13.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:12 vm08 bash[17774]: audit 2026-03-09T18:42:11.371637+0000 mgr.y (mgr.24991) 55 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:14.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:13 vm00 bash[22468]: audit 2026-03-09T18:42:12.155734+0000 mgr.y (mgr.24991) 56 : audit [DBG] from='client.15261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:13 vm00 bash[22468]: audit 2026-03-09T18:42:12.375790+0000 mgr.y (mgr.24991) 57 : audit [DBG] from='client.15267 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:13 vm00 bash[22468]: audit 2026-03-09T18:42:12.595099+0000 mgr.y (mgr.24991) 58 : audit [DBG] from='client.15273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:13 vm00 bash[22468]: audit 2026-03-09T18:42:12.855555+0000 mon.a 
(mon.0) 1188 : audit [DBG] from='client.? 192.168.123.100:0/1982555989' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:13 vm00 bash[22468]: audit 2026-03-09T18:42:13.329790+0000 mon.a (mon.0) 1189 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:13 vm00 bash[17468]: audit 2026-03-09T18:42:12.155734+0000 mgr.y (mgr.24991) 56 : audit [DBG] from='client.15261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:13 vm00 bash[17468]: audit 2026-03-09T18:42:12.375790+0000 mgr.y (mgr.24991) 57 : audit [DBG] from='client.15267 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:13 vm00 bash[17468]: audit 2026-03-09T18:42:12.595099+0000 mgr.y (mgr.24991) 58 : audit [DBG] from='client.15273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:13 vm00 bash[17468]: audit 2026-03-09T18:42:12.855555+0000 mon.a (mon.0) 1188 : audit [DBG] from='client.? 
192.168.123.100:0/1982555989' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:14.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:13 vm00 bash[17468]: audit 2026-03-09T18:42:13.329790+0000 mon.a (mon.0) 1189 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:13 vm08 bash[17774]: audit 2026-03-09T18:42:12.155734+0000 mgr.y (mgr.24991) 56 : audit [DBG] from='client.15261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:13 vm08 bash[17774]: audit 2026-03-09T18:42:12.375790+0000 mgr.y (mgr.24991) 57 : audit [DBG] from='client.15267 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:13 vm08 bash[17774]: audit 2026-03-09T18:42:12.595099+0000 mgr.y (mgr.24991) 58 : audit [DBG] from='client.15273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:13 vm08 bash[17774]: audit 2026-03-09T18:42:12.855555+0000 mon.a (mon.0) 1188 : audit [DBG] from='client.? 
192.168.123.100:0/1982555989' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:13 vm08 bash[17774]: audit 2026-03-09T18:42:13.329790+0000 mon.a (mon.0) 1189 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:14.934 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:14 vm08 bash[17774]: cluster 2026-03-09T18:42:12.944275+0000 mgr.y (mgr.24991) 59 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:14.934 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:14 vm08 bash[17774]: audit 2026-03-09T18:42:13.101754+0000 mgr.y (mgr.24991) 60 : audit [DBG] from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:15.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:14 vm00 bash[22468]: cluster 2026-03-09T18:42:12.944275+0000 mgr.y (mgr.24991) 59 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:15.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:14 vm00 bash[22468]: audit 2026-03-09T18:42:13.101754+0000 mgr.y (mgr.24991) 60 : audit [DBG] from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:14 vm00 bash[17468]: cluster 2026-03-09T18:42:12.944275+0000 mgr.y (mgr.24991) 59 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:15.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:14 vm00 bash[17468]: audit 2026-03-09T18:42:13.101754+0000 mgr.y (mgr.24991) 60 : audit [DBG] 
from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:15.902 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:15.902 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:15.903 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:15.903 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:15.903 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:15.903 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:15.903 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:15.903 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:15.903 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: Stopping Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:42:15.903 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.193 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.193 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:16.194 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:15 vm08 bash[17774]: cluster 2026-03-09T18:42:14.944915+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:16.194 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:15 vm08 bash[37867]: t=2026-03-09T18:42:15+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated" 2026-03-09T18:42:16.194 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:15 vm08 bash[44642]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-grafana-a 2026-03-09T18:42:16.194 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@grafana.a.service: Deactivated successfully. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:15 vm08 systemd[1]: Stopped Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:16.194 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:16.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:15 vm00 bash[22468]: cluster 2026-03-09T18:42:14.944915+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:16.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:15 vm00 bash[17468]: cluster 2026-03-09T18:42:14.944915+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:16.474 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 systemd[1]: Started Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578127863Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T18:42:16Z 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578467419Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578579929Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578625484Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578690977Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T18:42:16.833 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578748935Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578815069Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578855735Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578907462Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.578946094Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579015825Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579078212Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579139206Z level=info msg=Target target=[all] 2026-03-09T18:42:16.833 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579180634Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579231649Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.5792855Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579361422Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579419441Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=settings t=2026-03-09T18:42:16.579462731Z level=info msg="App mode production" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=sqlstore t=2026-03-09T18:42:16.579738218Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=sqlstore t=2026-03-09T18:42:16.57981944Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.580279079Z level=info msg="Starting DB migrations" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:16.585829471Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.60815732Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=22.321697ms 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.610145642Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.613021876Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.875704ms 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.614939526Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.615207839Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=269.454µs 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.616660678Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.617368423Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=708.126µs 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.618916852Z level=info msg="Executing migration" id="Add 
isPublic for dashboard" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.621240782Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.323219ms 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.664956141Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.665242607Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=285.744µs 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.667035032Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T18:42:16.833 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.671087449Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.052176ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.672577479Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.675275951Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.698111ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.676952309Z level=info msg="Executing migration" id="Add playlist column created_at" 
2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.679386455Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.435068ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.680605377Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.68396681Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.360832ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.685737635Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.688577572Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.839066ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.689779221Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.689870472Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=86.081µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.69092703Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T18:42:16.834 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.691554565Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=627.625µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.69320257Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.69378505Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=582.41µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.695152951Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.695257647Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=105.838µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.696499111Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.699205518Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.704243ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.700560013Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:16.700845988Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=286.607µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.702236331Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.702971278Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=734.836µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.704329681Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.706702763Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=2.370727ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.708054213Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.708850103Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=794.989µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.710495724Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.710594609Z 
level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=98.455µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.711727259Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.714319062Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=2.588938ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.71560561Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.71830878Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=2.701218ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.720067984Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.723275498Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=3.20417ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.72468694Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: 
logger=migrator t=2026-03-09T18:42:16.730664672Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.977591ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.73458975Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.734808439Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=221.304µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.73627278Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.739629133Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=3.358768ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.740964473Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.743496423Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.531319ms 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.745074077Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T18:42:16.834 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.745204049Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=130.224µs 2026-03-09T18:42:16.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.746717633Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.74946638Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.748005ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.750651789Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.752955151Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=2.30278ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.754129489Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.754582607Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=452.938µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.756071625Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, 
org_id) columns" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.756658834Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=585.025µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.757907862Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.75833948Z level=info msg="Migration successfully executed" id="create alert_image table" duration=431.368µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.759568953Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.760130252Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=560.86µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.761615873Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.761680374Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=64.731µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.762650822Z level=info msg="Executing migration" 
id=create_alert_configuration_history_table 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.763157751Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=506.728µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.76437007Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.764915722Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=545.351µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.766368141Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.766604953Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.768858292Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.769981435Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=1.127452ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 
bash[44768]: logger=migrator t=2026-03-09T18:42:16.771980898Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.772839566Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=846.836µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.774845962Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.778305237Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=3.460117ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.779674992Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.780393807Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=718.735µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.782024691Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.7826442Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=620.311µs 2026-03-09T18:42:16.835 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.783748158Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.784220041Z level=info msg="Migration successfully executed" id="create secrets table" duration=471.252µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.785964396Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.7984448Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=12.479972ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.799667399Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.802486887Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.818557ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.803670162Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.803899562Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=229.019µs 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: 
logger=migrator t=2026-03-09T18:42:16.805504106Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.820208924Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=14.700812ms 2026-03-09T18:42:16.835 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.821883278Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.833199423Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=11.315454ms 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.835674877Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.838111458Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.43577ms 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.839550683Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.842342149Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.792378ms 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.843504104Z level=info 
msg="Executing migration" id="permission attribute migration" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.845930557Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.425862ms 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.847111567Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.849488076Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.376409ms 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.850710986Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.851321388Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=610.382µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.853051908Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.853588283Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=536.233µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.854929153Z level=info msg="Executing migration" id="remove permission role_id action 
scope index" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.855487559Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=558.677µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.856533786Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.857051576Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=514.453µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.858556834Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.859071387Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=514.333µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.860551026Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.860650653Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=101.851µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.862017472Z level=info 
msg="Executing migration" id="rbac disabled migrator" 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.862079788Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=62.868µs 2026-03-09T18:42:17.085 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.864189439Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.864615696Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=426.007µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.865903447Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.86864576Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=2.742523ms 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.869994926Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.871437057Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.44204ms 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.872468647Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-09T18:42:17.086 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.872646281Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=177.453µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.874213304Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.874503377Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=290.013µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.875554254Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.876052818Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=494.386µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.877513182Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.878105961Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=592.618µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.879654009Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-09T18:42:17.086 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.882186871Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.532823ms 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.88328171Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.883366629Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=85.219µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.884385387Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.884922391Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=536.814µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.886115335Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.886629528Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=514.194µs 2026-03-09T18:42:17.086 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.888435439Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-09T18:42:17.086 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.889024862Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=589.433µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.890263141Z level=info msg="Executing migration" id="add correlation config column" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.892993581Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.732925ms 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.894202796Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.894719583Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=515.726µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.896041517Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.896538147Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=496.63µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.897680155Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-09T18:42:17.087 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.904759708Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=7.079432ms 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.905870508Z level=info msg="Executing migration" id="create correlation v2" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.906441548Z level=info msg="Migration successfully executed" id="create correlation v2" duration=568.214µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.907982072Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.908487127Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=505.035µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.909724242Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.910226674Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=499.836µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.911831708Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-09T18:42:17.087 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.912341462Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=509.705µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.913619565Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.913841641Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=218.459µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.914881457Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.915374952Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=493.865µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.916664454Z level=info msg="Executing migration" id="add provisioning column" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.919414654Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.747734ms 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.921900228Z level=info msg="Executing migration" id="create entity_events table" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:16.922335141Z level=info msg="Migration successfully executed" id="create entity_events table" duration=435.686µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.923462742Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.924006229Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=543.326µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.925556712Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.925791502Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.926731962Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.926957484Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.928158733Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-09T18:42:17.087 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.928583788Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=424.824µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.92999513Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.930482433Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=486.823µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.93137302Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.931872536Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=504.034µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.933069597Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.93361659Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=546.872µs 2026-03-09T18:42:17.087 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.935010931Z level=info msg="Executing migration" id="drop index 
UQE_dashboard_public_config_uid - v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.935496049Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=485.108µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.936384071Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.936896842Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=512.801µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.93817814Z level=info msg="Executing migration" id="Drop public config table" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.938652388Z level=info msg="Migration successfully executed" id="Drop public config table" duration=473.878µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.939664272Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.940343223Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=678.79µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.941720341Z level=info msg="Executing migration" 
id="create index UQE_dashboard_public_config_uid - v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.942345582Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=621.694µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.943347638Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.944166491Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=813.253µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.946149292Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.947068765Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=920.394µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.948553333Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.957441613Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.888069ms 2026-03-09T18:42:17.088 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.958654473Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.962009925Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=3.350743ms 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.963975915Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.968174716Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=4.19821ms 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.969665426Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.969941694Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=276.97µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.971054086Z level=info msg="Executing migration" id="add share column" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.973980514Z level=info msg="Migration successfully executed" id="add share column" duration=2.927921ms 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:16.97518518Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.975399892Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=214.652µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.976719813Z level=info msg="Executing migration" id="create file table" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.977322692Z level=info msg="Migration successfully executed" id="create file table" duration=604.492µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.978879786Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.979443251Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=560.558µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.980833694Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.981342677Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=509.053µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:16.982916333Z level=info msg="Executing migration" id="create file_meta table" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.983362418Z level=info msg="Migration successfully executed" id="create file_meta table" duration=443.411µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.984910777Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.985450597Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=539.8µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.986794633Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.986840108Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=45.996µs 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.988246721Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-09T18:42:17.088 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.988292448Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=46.227µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.989543099Z level=info msg="Executing migration" id="managed 
permissions migration" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.990961675Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.418635ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.992200504Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.99307421Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=873.845µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.994262073Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.995006967Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=744.855µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:16 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:16.996350632Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.000202604Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=3.85137ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.001519018Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-09T18:42:17.089 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.001657879Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=137.408µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.002750003Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.003274205Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=521.877µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.00461289Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.004952186Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=336.591µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.00649821Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.007114814Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=616.415µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.008241583Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T18:42:17.089 
INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.008639919Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=398.226µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.00996481Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.013097995Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=3.134809ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.014306638Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.017394218Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.086959ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.018906289Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.019769094Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=862.705µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.020843925Z level=info msg="Executing migration" id="update seed_assignment role_name column to 
nullable" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.051668431Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=30.820568ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.05334529Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.054096136Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=745.386µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.05542304Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.056095318Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=672.358µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.057467197Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.066155392Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.684829ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.068283567Z level=info msg="Executing migration" id="add origin column to seed_assignment" 
2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.070869196Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.586011ms 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.072423496Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.07262844Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=205.213µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.073917874Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.074038369Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=120.846µs 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.07541206Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T18:42:17.089 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.075920412Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=506.468µs 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.077775435Z level=info msg="Executing migration" id="managed folder permissions library 
panel actions migration" 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.078703964Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=931.395µs 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.079821866Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.080031269Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=207.019µs 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.08151658Z level=info msg="Executing migration" id="create folder table" 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.082066729Z level=info msg="Migration successfully executed" id="create folder table" duration=549.788µs 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.083127946Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.083650724Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=526.525µs 2026-03-09T18:42:17.090 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.084943825Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-09T18:42:17.385 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:17 vm08 bash[17774]: audit 2026-03-09T18:42:16.245390+0000 mon.a (mon.0) 1190 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:17.385 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:17 vm08 bash[17774]: audit 2026-03-09T18:42:16.252913+0000 mon.a (mon.0) 1191 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:17.385 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:17 vm08 bash[17774]: audit 2026-03-09T18:42:16.253915+0000 mon.a (mon.0) 1192 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.085445143Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=501.259µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.088005778Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.088047526Z level=info msg="Migration successfully executed" id="Update folder title length" duration=42.529µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.089201726Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.08999993Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=798.335µs 
2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.091302509Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.092091666Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=787.003µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.093793492Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.094498061Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=704.389µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.095807994Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.096266271Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=457.997µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.097472499Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.097701047Z level=info msg="Migration successfully executed" id="Remove ghost folders from the 
folder table" duration=228.537µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.098956497Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.099658222Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=701.273µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.1007846Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.101576142Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=795.659µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.102891595Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.10343452Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=546.322µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.104671767Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.1052554Z level=info msg="Migration successfully executed" id="Add 
unique index UQE_folder_org_id_parent_uid_title" duration=583.623µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.106402928Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.106903936Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=496.59µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.107875425Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.108337019Z level=info msg="Migration successfully executed" id="create anon_device table" duration=461.434µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.109819865Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.110389862Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=569.967µs 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.112620448Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T18:42:17.386 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.113255406Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" 
duration=635.099µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.114951251Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.115429988Z level=info msg="Migration successfully executed" id="create signing_key table" duration=478.577µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.116917191Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.117468252Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=550.962µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.119103734Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.126351442Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=7.244833ms 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.130219575Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.130680867Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration 
to kvstore" duration=464.589µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.131881346Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.135503877Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=3.622853ms 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.136716847Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.137944015Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.227467ms 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.144013948Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.144824124Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=810.386µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.146236439Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.146772592Z level=info msg="Migration successfully executed" id="Delete unique index for 
dashboard_org_id_folder_id_title" duration=536.194µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.147804715Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.148376595Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=575.137µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.149898595Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.150631186Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=732.181µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.151717821Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.152406459Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=688.218µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.153588723Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:17.154263807Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=674.393µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.155612341Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.156347368Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=735.247µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.157465541Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.157874155Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=408.625µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.15909969Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.159157228Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=57.968µs 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.16017355Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator 
t=2026-03-09T18:42:17.16300442Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.83071ms 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.164102485Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.166836143Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.733327ms 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.167932556Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T18:42:17.387 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.168152597Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=219.811µs 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=migrator t=2026-03-09T18:42:17.169495622Z level=info msg="migrations completed" performed=169 skipped=378 duration=583.720042ms 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=sqlstore t=2026-03-09T18:42:17.17003389Z level=info msg="Created default organization" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=secrets t=2026-03-09T18:42:17.17356508Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: 
logger=plugin.store t=2026-03-09T18:42:17.184959832Z level=info msg="Loading plugins..." 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=local.finder t=2026-03-09T18:42:17.225856253Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=plugin.store t=2026-03-09T18:42:17.225949998Z level=info msg="Plugins loaded" count=55 duration=40.991147ms 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=query_data t=2026-03-09T18:42:17.227929835Z level=info msg="Query Service initialization" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=live.push_http t=2026-03-09T18:42:17.230274544Z level=info msg="Live Push Gateway initialization" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ngalert.migration t=2026-03-09T18:42:17.232682412Z level=info msg=Starting 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ngalert t=2026-03-09T18:42:17.237612571Z level=warn msg="Unexpected number of rows updating alert configuration history" rows=0 org=1 hash=not-yet-calculated 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ngalert.state.manager t=2026-03-09T18:42:17.238411737Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=infra.usagestats.collector t=2026-03-09T18:42:17.24001059Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: 
logger=provisioning.datasources t=2026-03-09T18:42:17.242856227Z level=info msg="deleted datasource based on configuration" name=Dashboard1 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=provisioning.datasources t=2026-03-09T18:42:17.243211123Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=provisioning.alerting t=2026-03-09T18:42:17.259038642Z level=info msg="starting to provision alerting" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=provisioning.alerting t=2026-03-09T18:42:17.259100167Z level=info msg="finished to provision alerting" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=grafanaStorageLogger t=2026-03-09T18:42:17.260840014Z level=info msg="Storage starting" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=http.server t=2026-03-09T18:42:17.262768204Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=http.server t=2026-03-09T18:42:17.263636299Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: 
logger=ngalert.state.manager t=2026-03-09T18:42:17.265283773Z level=info msg="Warming state cache for startup" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ngalert.state.manager t=2026-03-09T18:42:17.269570859Z level=info msg="State cache has been initialized" states=0 duration=4.286685ms 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=provisioning.dashboard t=2026-03-09T18:42:17.270670798Z level=info msg="starting to provision dashboards" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ngalert.multiorg.alertmanager t=2026-03-09T18:42:17.278051916Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ngalert.scheduler t=2026-03-09T18:42:17.278269403Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=ticker t=2026-03-09T18:42:17.278485768Z level=info msg=starting first_tick=2026-03-09T18:42:20Z 2026-03-09T18:42:17.388 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=sqlstore.transactions t=2026-03-09T18:42:17.287847164Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T18:42:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:17 vm00 bash[22468]: audit 2026-03-09T18:42:16.245390+0000 mon.a (mon.0) 1190 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:17 vm00 bash[22468]: audit 2026-03-09T18:42:16.252913+0000 mon.a (mon.0) 1191 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:17.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:17 vm00 bash[22468]: audit 2026-03-09T18:42:16.253915+0000 mon.a (mon.0) 1192 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:17 vm00 bash[17468]: audit 2026-03-09T18:42:16.245390+0000 mon.a (mon.0) 1190 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:17 vm00 bash[17468]: audit 2026-03-09T18:42:16.252913+0000 mon.a (mon.0) 1191 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:17 vm00 bash[17468]: audit 2026-03-09T18:42:16.253915+0000 mon.a (mon.0) 1192 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:17.724 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=plugins.update.checker t=2026-03-09T18:42:17.3862285Z level=info msg="Update check succeeded" duration=108.900859ms 2026-03-09T18:42:17.724 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=provisioning.dashboard t=2026-03-09T18:42:17.462428932Z level=info msg="finished to provision dashboards" 2026-03-09T18:42:17.724 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=grafana-apiserver t=2026-03-09T18:42:17.470048366Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T18:42:17.724 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:17 vm08 bash[44768]: logger=grafana-apiserver t=2026-03-09T18:42:17.470393102Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T18:42:18.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:18 vm08 bash[17774]: cluster 2026-03-09T18:42:16.945336+0000 mgr.y (mgr.24991) 62 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:19.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:18 vm00 bash[22468]: cluster 2026-03-09T18:42:16.945336+0000 mgr.y (mgr.24991) 62 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:19.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:18 vm00 bash[17468]: cluster 2026-03-09T18:42:16.945336+0000 mgr.y (mgr.24991) 62 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:19.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:42:19] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-09T18:42:20.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:20 vm08 bash[17774]: cluster 2026-03-09T18:42:18.945653+0000 mgr.y (mgr.24991) 63 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:21.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:20 vm00 bash[22468]: cluster 2026-03-09T18:42:18.945653+0000 mgr.y (mgr.24991) 63 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:21.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:20 vm00 bash[17468]: cluster 2026-03-09T18:42:18.945653+0000 mgr.y (mgr.24991) 63 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:22 vm00 bash[22468]: cluster 
2026-03-09T18:42:20.946155+0000 mgr.y (mgr.24991) 64 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:22 vm00 bash[22468]: audit 2026-03-09T18:42:21.373399+0000 mgr.y (mgr.24991) 65 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:22 vm00 bash[22468]: audit 2026-03-09T18:42:21.748800+0000 mon.a (mon.0) 1193 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:22 vm00 bash[22468]: audit 2026-03-09T18:42:21.755600+0000 mon.a (mon.0) 1194 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:22 vm00 bash[22468]: audit 2026-03-09T18:42:22.359102+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:22 vm00 bash[22468]: audit 2026-03-09T18:42:22.364729+0000 mon.a (mon.0) 1196 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:22 vm00 bash[17468]: cluster 2026-03-09T18:42:20.946155+0000 mgr.y (mgr.24991) 64 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:22 vm00 bash[17468]: audit 2026-03-09T18:42:21.373399+0000 mgr.y (mgr.24991) 65 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:23.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:22 vm00 bash[17468]: audit 2026-03-09T18:42:21.748800+0000 mon.a (mon.0) 1193 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:22 vm00 bash[17468]: audit 2026-03-09T18:42:21.755600+0000 mon.a (mon.0) 1194 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:22 vm00 bash[17468]: audit 2026-03-09T18:42:22.359102+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:22 vm00 bash[17468]: audit 2026-03-09T18:42:22.364729+0000 mon.a (mon.0) 1196 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:22 vm08 bash[17774]: cluster 2026-03-09T18:42:20.946155+0000 mgr.y (mgr.24991) 64 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:23.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:22 vm08 bash[17774]: audit 2026-03-09T18:42:21.373399+0000 mgr.y (mgr.24991) 65 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:23.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:22 vm08 bash[17774]: audit 2026-03-09T18:42:21.748800+0000 mon.a (mon.0) 1193 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:22 vm08 bash[17774]: audit 2026-03-09T18:42:21.755600+0000 mon.a (mon.0) 1194 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:42:22 vm08 bash[17774]: audit 2026-03-09T18:42:22.359102+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:23.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:22 vm08 bash[17774]: audit 2026-03-09T18:42:22.364729+0000 mon.a (mon.0) 1196 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:25.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:24 vm00 bash[22468]: cluster 2026-03-09T18:42:22.946457+0000 mgr.y (mgr.24991) 66 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:25.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:24 vm00 bash[17468]: cluster 2026-03-09T18:42:22.946457+0000 mgr.y (mgr.24991) 66 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:24 vm08 bash[17774]: cluster 2026-03-09T18:42:22.946457+0000 mgr.y (mgr.24991) 66 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:27.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:26 vm00 bash[22468]: cluster 2026-03-09T18:42:24.947067+0000 mgr.y (mgr.24991) 67 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:27.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:26 vm00 bash[17468]: cluster 2026-03-09T18:42:24.947067+0000 mgr.y (mgr.24991) 67 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:27.185 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:26 vm08 bash[17774]: cluster 2026-03-09T18:42:24.947067+0000 mgr.y (mgr.24991) 67 : cluster [DBG] pgmap v24: 161 
pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:29.051 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:28 vm08 bash[17774]: cluster 2026-03-09T18:42:26.947410+0000 mgr.y (mgr.24991) 68 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:29.051 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:28 vm08 bash[17774]: audit 2026-03-09T18:42:28.330283+0000 mon.a (mon.0) 1197 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:29.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:28 vm00 bash[22468]: cluster 2026-03-09T18:42:26.947410+0000 mgr.y (mgr.24991) 68 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:29.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:28 vm00 bash[22468]: audit 2026-03-09T18:42:28.330283+0000 mon.a (mon.0) 1197 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:29.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:28 vm00 bash[17468]: cluster 2026-03-09T18:42:26.947410+0000 mgr.y (mgr.24991) 68 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:29.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:28 vm00 bash[17468]: audit 2026-03-09T18:42:28.330283+0000 mon.a (mon.0) 1197 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:29.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:42:29] "GET /metrics HTTP/1.1" 
200 37547 "" "Prometheus/2.51.0" 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: cluster 2026-03-09T18:42:28.947815+0000 mgr.y (mgr.24991) 69 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.118097+0000 mon.a (mon.0) 1198 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.124919+0000 mon.a (mon.0) 1199 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.128083+0000 mon.a (mon.0) 1200 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.129437+0000 mon.a (mon.0) 1201 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.134997+0000 mon.a (mon.0) 1202 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.154862+0000 mon.a (mon.0) 1203 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 
2026-03-09T18:42:29.155350+0000 mgr.y (mgr.24991) 70 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.185313+0000 mon.a (mon.0) 1204 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.186639+0000 mon.a (mon.0) 1205 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.187574+0000 mon.a (mon.0) 1206 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.188367+0000 mon.a (mon.0) 1207 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.189077+0000 mon.a (mon.0) 1208 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.189673+0000 mon.a (mon.0) 1209 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.190769+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.191459+0000 mon.a (mon.0) 1211 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.192292+0000 mon.a (mon.0) 1212 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.192989+0000 mon.a (mon.0) 1213 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.193682+0000 mon.a (mon.0) 1214 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.194356+0000 mon.a (mon.0) 1215 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.195023+0000 mon.a (mon.0) 1216 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.195625+0000 mon.a (mon.0) 1217 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.196245+0000 mon.a (mon.0) 1218 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: cephadm 2026-03-09T18:42:29.196637+0000 mgr.y (mgr.24991) 71 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.205434+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.209182+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.214934+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.217122+0000 mon.a (mon.0) 1222 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.221541+0000 mon.a (mon.0) 1223 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 
2026-03-09T18:42:29.222250+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.227240+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.227975+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.228431+0000 mon.a (mon.0) 1227 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.235110+0000 mon.a (mon.0) 1228 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.235944+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.236408+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.242308+0000 mon.a (mon.0) 1231 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.243049+0000 mon.a (mon.0) 1232 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.243461+0000 mon.a (mon.0) 1233 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.248989+0000 mon.a (mon.0) 1234 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.249776+0000 mon.a (mon.0) 1235 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.250232+0000 mon.a (mon.0) 1236 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.256218+0000 mon.a (mon.0) 1237 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.256993+0000 mon.a (mon.0) 1238 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.263006+0000 mon.a (mon.0) 1239 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.263834+0000 mon.a (mon.0) 1240 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.264244+0000 mon.a (mon.0) 1241 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.264622+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.265031+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.265406+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.265827+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: cephadm 2026-03-09T18:42:29.266161+0000 mgr.y (mgr.24991) 72 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.266434+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.271528+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:30 vm00 bash[22468]: audit 2026-03-09T18:42:29.272127+0000 mon.a (mon.0) 1248 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: cluster 2026-03-09T18:42:28.947815+0000 mgr.y (mgr.24991) 69 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.118097+0000 mon.a (mon.0) 1198 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.124919+0000 mon.a (mon.0) 1199 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.128083+0000 mon.a (mon.0) 1200 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:30.380 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.129437+0000 mon.a (mon.0) 1201 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.134997+0000 mon.a (mon.0) 1202 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.154862+0000 mon.a (mon.0) 1203 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.155350+0000 mgr.y (mgr.24991) 70 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.185313+0000 mon.a (mon.0) 1204 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.186639+0000 mon.a (mon.0) 1205 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.187574+0000 mon.a (mon.0) 1206 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 
2026-03-09T18:42:29.188367+0000 mon.a (mon.0) 1207 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.189077+0000 mon.a (mon.0) 1208 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.189673+0000 mon.a (mon.0) 1209 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.190769+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.191459+0000 mon.a (mon.0) 1211 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.192292+0000 mon.a (mon.0) 1212 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.192989+0000 mon.a (mon.0) 1213 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.193682+0000 mon.a (mon.0) 1214 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.194356+0000 mon.a (mon.0) 1215 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.195023+0000 mon.a (mon.0) 1216 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.195625+0000 mon.a (mon.0) 1217 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.196245+0000 mon.a (mon.0) 1218 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: cephadm 2026-03-09T18:42:29.196637+0000 mgr.y (mgr.24991) 71 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.205434+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.209182+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 
2026-03-09T18:42:29.214934+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.217122+0000 mon.a (mon.0) 1222 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.221541+0000 mon.a (mon.0) 1223 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.222250+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.227240+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.227975+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.228431+0000 mon.a (mon.0) 1227 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.235110+0000 mon.a (mon.0) 1228 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.235944+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.236408+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.242308+0000 mon.a (mon.0) 1231 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.243049+0000 mon.a (mon.0) 1232 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.243461+0000 mon.a (mon.0) 1233 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.ceph-exporter"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.248989+0000 mon.a (mon.0) 1234 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.249776+0000 mon.a (mon.0) 1235 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.250232+0000 mon.a (mon.0) 1236 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.256218+0000 mon.a (mon.0) 1237 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.256993+0000 mon.a (mon.0) 1238 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.263006+0000 mon.a (mon.0) 1239 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:42:30.381 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.263834+0000 mon.a (mon.0) 1240 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.264244+0000 mon.a (mon.0) 1241 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.264622+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.265031+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.265406+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.265827+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: cephadm 2026-03-09T18:42:29.266161+0000 mgr.y (mgr.24991) 72 : 
cephadm [INF] Upgrade: Complete! 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.266434+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.271528+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:42:30.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:30 vm00 bash[17468]: audit 2026-03-09T18:42:29.272127+0000 mon.a (mon.0) 1248 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:30.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: cluster 2026-03-09T18:42:28.947815+0000 mgr.y (mgr.24991) 69 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.118097+0000 mon.a (mon.0) 1198 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.124919+0000 mon.a (mon.0) 1199 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.128083+0000 mon.a (mon.0) 1200 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:30.475 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.129437+0000 mon.a (mon.0) 1201 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.134997+0000 mon.a (mon.0) 1202 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.154862+0000 mon.a (mon.0) 1203 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.155350+0000 mgr.y (mgr.24991) 70 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.185313+0000 mon.a (mon.0) 1204 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.186639+0000 mon.a (mon.0) 1205 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.187574+0000 mon.a (mon.0) 1206 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 
2026-03-09T18:42:29.188367+0000 mon.a (mon.0) 1207 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.189077+0000 mon.a (mon.0) 1208 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.189673+0000 mon.a (mon.0) 1209 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.190769+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.191459+0000 mon.a (mon.0) 1211 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.192292+0000 mon.a (mon.0) 1212 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.192989+0000 mon.a (mon.0) 1213 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.193682+0000 mon.a (mon.0) 1214 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.194356+0000 mon.a (mon.0) 1215 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.195023+0000 mon.a (mon.0) 1216 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.195625+0000 mon.a (mon.0) 1217 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.196245+0000 mon.a (mon.0) 1218 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: cephadm 2026-03-09T18:42:29.196637+0000 mgr.y (mgr.24991) 71 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.205434+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.209182+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 
2026-03-09T18:42:29.214934+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.217122+0000 mon.a (mon.0) 1222 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.221541+0000 mon.a (mon.0) 1223 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.222250+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.227240+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.227975+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.228431+0000 mon.a (mon.0) 1227 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.235110+0000 mon.a (mon.0) 1228 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.235944+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.236408+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.242308+0000 mon.a (mon.0) 1231 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.243049+0000 mon.a (mon.0) 1232 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.243461+0000 mon.a (mon.0) 1233 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.ceph-exporter"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.248989+0000 mon.a (mon.0) 1234 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.249776+0000 mon.a (mon.0) 1235 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.250232+0000 mon.a (mon.0) 1236 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.256218+0000 mon.a (mon.0) 1237 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.256993+0000 mon.a (mon.0) 1238 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.263006+0000 mon.a (mon.0) 1239 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:42:30.475 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.263834+0000 mon.a (mon.0) 1240 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.264244+0000 mon.a (mon.0) 1241 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.264622+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.265031+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.265406+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.265827+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: cephadm 2026-03-09T18:42:29.266161+0000 mgr.y (mgr.24991) 72 : 
cephadm [INF] Upgrade: Complete! 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.266434+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:42:30.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.271528+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:42:30.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:30 vm08 bash[17774]: audit 2026-03-09T18:42:29.272127+0000 mon.a (mon.0) 1248 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:32 vm08 bash[17774]: cluster 2026-03-09T18:42:30.948380+0000 mgr.y (mgr.24991) 73 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:32.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:32 vm08 bash[17774]: audit 2026-03-09T18:42:31.384208+0000 mgr.y (mgr.24991) 74 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:32 vm00 bash[22468]: cluster 2026-03-09T18:42:30.948380+0000 mgr.y (mgr.24991) 73 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:33.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:32 vm00 bash[22468]: audit 2026-03-09T18:42:31.384208+0000 mgr.y (mgr.24991) 74 : audit [DBG] from='client.25132 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:32 vm00 bash[17468]: cluster 2026-03-09T18:42:30.948380+0000 mgr.y (mgr.24991) 73 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:32 vm00 bash[17468]: audit 2026-03-09T18:42:31.384208+0000 mgr.y (mgr.24991) 74 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:34 vm00 bash[22468]: cluster 2026-03-09T18:42:32.948703+0000 mgr.y (mgr.24991) 75 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:34 vm00 bash[22468]: audit 2026-03-09T18:42:33.320634+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:34 vm00 bash[17468]: cluster 2026-03-09T18:42:32.948703+0000 mgr.y (mgr.24991) 75 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:34 vm00 bash[17468]: audit 2026-03-09T18:42:33.320634+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:34 vm08 bash[17774]: cluster 2026-03-09T18:42:32.948703+0000 mgr.y (mgr.24991) 75 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:42:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:34 vm08 bash[17774]: audit 2026-03-09T18:42:33.320634+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.674571+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.680419+0000 mon.a (mon.0) 1251 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.681538+0000 mon.a (mon.0) 1252 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.682182+0000 mon.a (mon.0) 1253 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.688937+0000 mon.a (mon.0) 1254 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.732163+0000 mon.a (mon.0) 1255 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.733569+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.734086+0000 mon.a (mon.0) 1257 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:35.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:35 vm08 bash[17774]: audit 2026-03-09T18:42:34.741706+0000 mon.a (mon.0) 1258 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.674571+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.680419+0000 mon.a (mon.0) 1251 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.681538+0000 mon.a (mon.0) 1252 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.682182+0000 mon.a (mon.0) 1253 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.688937+0000 mon.a (mon.0) 1254 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 
2026-03-09T18:42:34.732163+0000 mon.a (mon.0) 1255 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.733569+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.734086+0000 mon.a (mon.0) 1257 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:35 vm00 bash[22468]: audit 2026-03-09T18:42:34.741706+0000 mon.a (mon.0) 1258 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.674571+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.680419+0000 mon.a (mon.0) 1251 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.681538+0000 mon.a (mon.0) 1252 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.682182+0000 mon.a (mon.0) 1253 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.688937+0000 mon.a (mon.0) 1254 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.732163+0000 mon.a (mon.0) 1255 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.733569+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.734086+0000 mon.a (mon.0) 1257 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:36.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:35 vm00 bash[17468]: audit 2026-03-09T18:42:34.741706+0000 mon.a (mon.0) 1258 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:36 vm00 bash[22468]: cluster 2026-03-09T18:42:34.949229+0000 mgr.y (mgr.24991) 76 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:36 vm00 bash[17468]: cluster 2026-03-09T18:42:34.949229+0000 mgr.y (mgr.24991) 76 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:37.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:36 vm08 bash[17774]: cluster 2026-03-09T18:42:34.949229+0000 mgr.y (mgr.24991) 76 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:39.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:38 vm00 bash[22468]: cluster 2026-03-09T18:42:36.949672+0000 mgr.y (mgr.24991) 77 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:38 vm00 bash[17468]: cluster 2026-03-09T18:42:36.949672+0000 mgr.y (mgr.24991) 77 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:38 vm08 bash[17774]: cluster 2026-03-09T18:42:36.949672+0000 mgr.y (mgr.24991) 77 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:39.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:42:39] "GET /metrics HTTP/1.1" 200 37547 "" "Prometheus/2.51.0" 2026-03-09T18:42:41.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:40 vm00 bash[22468]: cluster 2026-03-09T18:42:38.950013+0000 mgr.y (mgr.24991) 78 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:41.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:40 vm00 bash[17468]: cluster 2026-03-09T18:42:38.950013+0000 mgr.y (mgr.24991) 78 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:41.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:40 vm08 bash[17774]: cluster 
2026-03-09T18:42:38.950013+0000 mgr.y (mgr.24991) 78 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:42 vm00 bash[22468]: cluster 2026-03-09T18:42:40.950588+0000 mgr.y (mgr.24991) 79 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:42 vm00 bash[22468]: audit 2026-03-09T18:42:41.386158+0000 mgr.y (mgr.24991) 80 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:42 vm00 bash[17468]: cluster 2026-03-09T18:42:40.950588+0000 mgr.y (mgr.24991) 79 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:42 vm00 bash[17468]: audit 2026-03-09T18:42:41.386158+0000 mgr.y (mgr.24991) 80 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:42 vm08 bash[17774]: cluster 2026-03-09T18:42:40.950588+0000 mgr.y (mgr.24991) 79 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:42 vm08 bash[17774]: audit 2026-03-09T18:42:41.386158+0000 mgr.y (mgr.24991) 80 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:43.427 DEBUG:teuthology.orchestra.run.vm00:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 1'"'"'' 2026-03-09T18:42:43.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:43 vm00 bash[22468]: audit 2026-03-09T18:42:43.330424+0000 mon.a (mon.0) 1259 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:43.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:43 vm00 bash[17468]: audit 2026-03-09T18:42:43.330424+0000 mon.a (mon.0) 1259 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:43.977 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:42:44.029 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | keys'"'"' | grep $sha1' 2026-03-09T18:42:44.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:43 vm08 bash[17774]: audit 2026-03-09T18:42:43.330424+0000 mon.a (mon.0) 1259 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:44.542 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T18:42:44.591 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 2'"'"'' 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:44 vm00 bash[22468]: cluster 2026-03-09T18:42:42.950990+0000 mgr.y (mgr.24991) 81 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:44 vm00 bash[22468]: audit 2026-03-09T18:42:43.330081+0000 mgr.y (mgr.24991) 82 : audit [DBG] from='client.25049 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:44 vm00 bash[22468]: audit 2026-03-09T18:42:43.965937+0000 mon.c (mon.1) 154 : audit [DBG] from='client.? 192.168.123.100:0/2011399205' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:44 vm00 bash[22468]: audit 2026-03-09T18:42:44.533674+0000 mon.c (mon.1) 155 : audit [DBG] from='client.? 
192.168.123.100:0/1112787394' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:44 vm00 bash[17468]: cluster 2026-03-09T18:42:42.950990+0000 mgr.y (mgr.24991) 81 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:44 vm00 bash[17468]: audit 2026-03-09T18:42:43.330081+0000 mgr.y (mgr.24991) 82 : audit [DBG] from='client.25049 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:44 vm00 bash[17468]: audit 2026-03-09T18:42:43.965937+0000 mon.c (mon.1) 154 : audit [DBG] from='client.? 192.168.123.100:0/2011399205' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:44.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:44 vm00 bash[17468]: audit 2026-03-09T18:42:44.533674+0000 mon.c (mon.1) 155 : audit [DBG] from='client.? 
192.168.123.100:0/1112787394' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:45.188 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:42:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:44 vm08 bash[17774]: cluster 2026-03-09T18:42:42.950990+0000 mgr.y (mgr.24991) 81 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:44 vm08 bash[17774]: audit 2026-03-09T18:42:43.330081+0000 mgr.y (mgr.24991) 82 : audit [DBG] from='client.25049 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:44 vm08 bash[17774]: audit 2026-03-09T18:42:43.965937+0000 mon.c (mon.1) 154 : audit [DBG] from='client.? 192.168.123.100:0/2011399205' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:44 vm08 bash[17774]: audit 2026-03-09T18:42:44.533674+0000 mon.c (mon.1) 155 : audit [DBG] from='client.? 192.168.123.100:0/1112787394' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:45.238 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 2'"'"'' 2026-03-09T18:42:46.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:45 vm00 bash[17468]: audit 2026-03-09T18:42:45.179786+0000 mon.c (mon.1) 156 : audit [DBG] from='client.? 
192.168.123.100:0/2999207227' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:46.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:45 vm00 bash[22468]: audit 2026-03-09T18:42:45.179786+0000 mon.c (mon.1) 156 : audit [DBG] from='client.? 192.168.123.100:0/2999207227' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:45 vm08 bash[17774]: audit 2026-03-09T18:42:45.179786+0000 mon.c (mon.1) 156 : audit [DBG] from='client.? 192.168.123.100:0/2999207227' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:47.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:46 vm00 bash[17468]: cluster 2026-03-09T18:42:44.951603+0000 mgr.y (mgr.24991) 83 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:47.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:46 vm00 bash[17468]: audit 2026-03-09T18:42:45.731028+0000 mgr.y (mgr.24991) 84 : audit [DBG] from='client.15312 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:47.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:46 vm00 bash[22468]: cluster 2026-03-09T18:42:44.951603+0000 mgr.y (mgr.24991) 83 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:47.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:46 vm00 bash[22468]: audit 2026-03-09T18:42:45.731028+0000 mgr.y (mgr.24991) 84 : audit [DBG] from='client.15312 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:47.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:46 vm08 bash[17774]: cluster 2026-03-09T18:42:44.951603+0000 mgr.y (mgr.24991) 83 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:47.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:46 vm08 bash[17774]: audit 2026-03-09T18:42:45.731028+0000 mgr.y (mgr.24991) 84 : audit [DBG] from='client.15312 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:47.305 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:42:47.352 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null, 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false, 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "which": "", 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null, 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "message": "", 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:42:47.832 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:42:47.900 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:42:48.414 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:42:48.481 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk '"'"'{print $2}'"'"')' 2026-03-09T18:42:49.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:48 vm00 bash[22468]: cluster 2026-03-09T18:42:46.951944+0000 mgr.y (mgr.24991) 85 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:49.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:48 vm00 bash[22468]: audit 2026-03-09T18:42:47.835507+0000 mgr.y (mgr.24991) 86 : audit [DBG] from='client.15318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:49.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:48 vm00 bash[22468]: audit 2026-03-09T18:42:48.417647+0000 mon.c (mon.1) 157 : audit [DBG] from='client.? 
192.168.123.100:0/2946086389' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:42:49.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:48 vm00 bash[17468]: cluster 2026-03-09T18:42:46.951944+0000 mgr.y (mgr.24991) 85 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:49.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:48 vm00 bash[17468]: audit 2026-03-09T18:42:47.835507+0000 mgr.y (mgr.24991) 86 : audit [DBG] from='client.15318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:49.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:48 vm00 bash[17468]: audit 2026-03-09T18:42:48.417647+0000 mon.c (mon.1) 157 : audit [DBG] from='client.? 192.168.123.100:0/2946086389' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:42:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:48 vm08 bash[17774]: cluster 2026-03-09T18:42:46.951944+0000 mgr.y (mgr.24991) 85 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:48 vm08 bash[17774]: audit 2026-03-09T18:42:47.835507+0000 mgr.y (mgr.24991) 86 : audit [DBG] from='client.15318 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:49.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:48 vm08 bash[17774]: audit 2026-03-09T18:42:48.417647+0000 mon.c (mon.1) 157 : audit [DBG] from='client.? 
192.168.123.100:0/2946086389' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:42:49.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:42:49] "GET /metrics HTTP/1.1" 200 37538 "" "Prometheus/2.51.0" 2026-03-09T18:42:50.618 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:50.713 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-09T18:42:51.018 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:50 vm00 bash[22468]: cluster 2026-03-09T18:42:48.952294+0000 mgr.y (mgr.24991) 87 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:50 vm00 bash[22468]: audit 2026-03-09T18:42:48.958397+0000 mgr.y (mgr.24991) 88 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:50 vm00 bash[22468]: audit 2026-03-09T18:42:49.182983+0000 mgr.y (mgr.24991) 89 : audit [DBG] from='client.25073 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:51.019 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:50 vm00 bash[22468]: audit 2026-03-09T18:42:50.616256+0000 mon.a (mon.0) 1260 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:50 vm00 bash[22468]: audit 2026-03-09T18:42:50.617786+0000 mon.a (mon.0) 1261 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:50 vm00 bash[17468]: cluster 2026-03-09T18:42:48.952294+0000 mgr.y (mgr.24991) 87 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:50 vm00 bash[17468]: audit 2026-03-09T18:42:48.958397+0000 mgr.y (mgr.24991) 88 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:50 vm00 bash[17468]: audit 2026-03-09T18:42:49.182983+0000 mgr.y (mgr.24991) 89 : audit [DBG] from='client.25073 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:50 vm00 bash[17468]: audit 2026-03-09T18:42:50.616256+0000 mon.a (mon.0) 1260 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:51.019 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:50 vm00 bash[17468]: audit 2026-03-09T18:42:50.617786+0000 mon.a (mon.0) 1261 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T18:42:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:50 vm08 bash[17774]: cluster 2026-03-09T18:42:48.952294+0000 mgr.y (mgr.24991) 87 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:50 vm08 bash[17774]: audit 2026-03-09T18:42:48.958397+0000 mgr.y (mgr.24991) 88 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:50 vm08 bash[17774]: audit 2026-03-09T18:42:49.182983+0000 mgr.y (mgr.24991) 89 : audit [DBG] from='client.25073 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:51.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:50 vm08 bash[17774]: audit 2026-03-09T18:42:50.616256+0000 mon.a (mon.0) 1260 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:51.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:50 vm08 bash[17774]: audit 2026-03-09T18:42:50.617786+0000 mon.a (mon.0) 1261 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:51.315 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (13m) 52s ago 20m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (35s) 17s ago 19m 64.9M - 
10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (60s) 52s ago 19m 41.4M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (58s) 17s ago 22m 463M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (10m) 52s ago 23m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (23m) 52s ago 23m 71.7M 2048M 17.2.0 e1d6a67b021e 819e8890799a 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (22m) 17s ago 22m 54.7M 2048M 17.2.0 e1d6a67b021e 5b51a6d0bbdd 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (22m) 52s ago 22m 57.2M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (13m) 52s ago 20m 7879k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (13m) 17s ago 20m 7835k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (22m) 52s ago 22m 52.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (22m) 52s ago 22m 53.6M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:42:51.759 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (22m) 52s ago 22m 48.8M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (21m) 52s ago 21m 54.7M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (21m) 17s ago 21m 
53.4M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (21m) 17s ago 21m 52.8M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (20m) 17s ago 20m 51.4M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (20m) 17s ago 20m 52.0M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (59s) 17s ago 20m 41.1M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (19m) 52s ago 19m 87.8M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:42:51.760 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (19m) 17s ago 19m 88.8M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:52.044 
INFO:teuthology.orchestra.run.vm00.stdout: "mds": {}, 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13, 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:42:52.044 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:42:52.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:51 vm00 bash[22468]: cephadm 2026-03-09T18:42:50.607145+0000 mgr.y (mgr.24991) 90 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:51 vm00 bash[22468]: audit 2026-03-09T18:42:51.019214+0000 mon.a (mon.0) 1262 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:51 vm00 bash[22468]: audit 2026-03-09T18:42:51.020004+0000 mon.a (mon.0) 1263 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:51 vm00 bash[22468]: audit 2026-03-09T18:42:51.028597+0000 mon.a (mon.0) 1264 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:52.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:51 vm00 bash[17468]: cephadm 2026-03-09T18:42:50.607145+0000 mgr.y (mgr.24991) 90 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:52.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:51 vm00 bash[17468]: audit 2026-03-09T18:42:51.019214+0000 mon.a (mon.0) 1262 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:52.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:51 vm00 bash[17468]: audit 2026-03-09T18:42:51.020004+0000 mon.a (mon.0) 1263 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:52.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:51 vm00 bash[17468]: audit 2026-03-09T18:42:51.028597+0000 mon.a (mon.0) 1264 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:51 vm08 bash[17774]: cephadm 2026-03-09T18:42:50.607145+0000 mgr.y (mgr.24991) 90 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:51 vm08 bash[17774]: audit 2026-03-09T18:42:51.019214+0000 mon.a (mon.0) 1262 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:51 vm08 bash[17774]: audit 2026-03-09T18:42:51.020004+0000 mon.a (mon.0) 1263 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:42:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:51 vm08 
bash[17774]: audit 2026-03-09T18:42:51.028597+0000 mon.a (mon.0) 1264 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) mon on host(s) vm08", 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "", 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:42:52.262 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: cluster 2026-03-09T18:42:50.952912+0000 mgr.y (mgr.24991) 91 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: cephadm 2026-03-09T18:42:51.085010+0000 mgr.y (mgr.24991) 92 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:51.288810+0000 mgr.y (mgr.24991) 93 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:51.393684+0000 mgr.y (mgr.24991) 94 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:51.545091+0000 mgr.y (mgr.24991) 95 : audit [DBG] from='client.15345 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:51.757147+0000 mgr.y (mgr.24991) 96 : audit [DBG] from='client.15351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.047364+0000 mon.c (mon.1) 158 : audit [DBG] from='client.? 192.168.123.100:0/2204476335' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.768293+0000 mon.a (mon.0) 1265 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.770187+0000 mon.a (mon.0) 1266 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.771985+0000 mon.a (mon.0) 1267 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.781939+0000 mon.a 
(mon.0) 1268 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.785678+0000 mon.a (mon.0) 1269 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:52 vm00 bash[22468]: audit 2026-03-09T18:42:52.786567+0000 mon.a (mon.0) 1270 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: cluster 2026-03-09T18:42:50.952912+0000 mgr.y (mgr.24991) 91 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: cephadm 2026-03-09T18:42:51.085010+0000 mgr.y (mgr.24991) 92 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:51.288810+0000 mgr.y (mgr.24991) 93 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:51.393684+0000 mgr.y (mgr.24991) 94 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:51.545091+0000 mgr.y (mgr.24991) 95 : audit [DBG] from='client.15345 -' entity='client.admin' 
cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:51.757147+0000 mgr.y (mgr.24991) 96 : audit [DBG] from='client.15351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:52.047364+0000 mon.c (mon.1) 158 : audit [DBG] from='client.? 192.168.123.100:0/2204476335' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:53.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:52.768293+0000 mon.a (mon.0) 1265 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:53.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:52.770187+0000 mon.a (mon.0) 1266 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:53.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:52.771985+0000 mon.a (mon.0) 1267 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:53.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:52.781939+0000 mon.a (mon.0) 1268 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:53.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 vm00 bash[17468]: audit 2026-03-09T18:42:52.785678+0000 mon.a (mon.0) 1269 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:42:53.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:52 
vm00 bash[17468]: audit 2026-03-09T18:42:52.786567+0000 mon.a (mon.0) 1270 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: cluster 2026-03-09T18:42:50.952912+0000 mgr.y (mgr.24991) 91 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: cephadm 2026-03-09T18:42:51.085010+0000 mgr.y (mgr.24991) 92 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:51.288810+0000 mgr.y (mgr.24991) 93 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:51.393684+0000 mgr.y (mgr.24991) 94 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:51.545091+0000 mgr.y (mgr.24991) 95 : audit [DBG] from='client.15345 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:51.757147+0000 mgr.y (mgr.24991) 96 : audit [DBG] from='client.15351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 
bash[17774]: audit 2026-03-09T18:42:52.047364+0000 mon.c (mon.1) 158 : audit [DBG] from='client.? 192.168.123.100:0/2204476335' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:52.768293+0000 mon.a (mon.0) 1265 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:52.770187+0000 mon.a (mon.0) 1266 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:52.771985+0000 mon.a (mon.0) 1267 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:52.781939+0000 mon.a (mon.0) 1268 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:52.785678+0000 mon.a (mon.0) 1269 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:42:53.132 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:52 vm08 bash[17774]: audit 2026-03-09T18:42:52.786567+0000 mon.a (mon.0) 1270 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T18:42:53.880 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:53.881 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:54.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: audit 2026-03-09T18:42:52.265442+0000 mgr.y (mgr.24991) 97 : audit [DBG] from='client.15360 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: cephadm 2026-03-09T18:42:52.768808+0000 mgr.y (mgr.24991) 98 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: cephadm 2026-03-09T18:42:52.768842+0000 mgr.y (mgr.24991) 99 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: cephadm 2026-03-09T18:42:52.772604+0000 mgr.y (mgr.24991) 100 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: cephadm 2026-03-09T18:42:52.787156+0000 mgr.y (mgr.24991) 101 : cephadm [INF] Upgrade: It appears safe to stop mon.b 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: audit 2026-03-09T18:42:53.283514+0000 mon.a (mon.0) 1271 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: audit 2026-03-09T18:42:53.284370+0000 mon.a (mon.0) 1272 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: audit 
2026-03-09T18:42:53.284833+0000 mon.a (mon.0) 1273 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: audit 2026-03-09T18:42:53.285328+0000 mon.a (mon.0) 1274 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 systemd[1]: Stopping Ceph mon.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: debug 2026-03-09T18:42:53.943+0000 7f92b1d2b700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:53 vm08 bash[17774]: debug 2026-03-09T18:42:53.943+0000 7f92b1d2b700 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46009]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon-b 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.b.service: Deactivated successfully. 2026-03-09T18:42:54.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: Stopped Ceph mon.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: audit 2026-03-09T18:42:52.265442+0000 mgr.y (mgr.24991) 97 : audit [DBG] from='client.15360 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: cephadm 2026-03-09T18:42:52.768808+0000 mgr.y (mgr.24991) 98 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: cephadm 2026-03-09T18:42:52.768842+0000 mgr.y (mgr.24991) 99 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: cephadm 2026-03-09T18:42:52.772604+0000 mgr.y (mgr.24991) 100 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: cephadm 2026-03-09T18:42:52.787156+0000 mgr.y (mgr.24991) 101 : cephadm [INF] Upgrade: It appears safe to stop mon.b 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: audit 2026-03-09T18:42:53.283514+0000 mon.a (mon.0) 1271 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: audit 2026-03-09T18:42:53.284370+0000 mon.a (mon.0) 1272 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: audit 
2026-03-09T18:42:53.284833+0000 mon.a (mon.0) 1273 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:53 vm00 bash[22468]: audit 2026-03-09T18:42:53.285328+0000 mon.a (mon.0) 1274 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: audit 2026-03-09T18:42:52.265442+0000 mgr.y (mgr.24991) 97 : audit [DBG] from='client.15360 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: cephadm 2026-03-09T18:42:52.768808+0000 mgr.y (mgr.24991) 98 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: cephadm 2026-03-09T18:42:52.768842+0000 mgr.y (mgr.24991) 99 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: cephadm 2026-03-09T18:42:52.772604+0000 mgr.y (mgr.24991) 100 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: cephadm 2026-03-09T18:42:52.787156+0000 mgr.y (mgr.24991) 101 : cephadm [INF] Upgrade: It appears safe to stop mon.b 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: audit 2026-03-09T18:42:53.283514+0000 
mon.a (mon.0) 1271 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: audit 2026-03-09T18:42:53.284370+0000 mon.a (mon.0) 1272 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: audit 2026-03-09T18:42:53.284833+0000 mon.a (mon.0) 1273 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:42:54.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:53 vm00 bash[17468]: audit 2026-03-09T18:42:53.285328+0000 mon.a (mon.0) 1274 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: Started Ceph mon.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.447+0000 7fedb3f2ad80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.447+0000 7fedb3f2ad80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.447+0000 7fedb3f2ad80 0 pidfile_write: ignore empty --pid-file 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.447+0000 7fedb3f2ad80 0 load: jerasure load: lrc 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Git sha 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: DB SUMMARY 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: DB Session ID: N6UY9TWPGBU5FZ1MRKO3 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 
2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 2063 Bytes 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 1, files: 000042.sst 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000040.log size: 2364357 ; 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 
bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.env: 0x556498af3dc0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.info_log: 0x5564bbf3b7e0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.statistics: (nil) 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.use_fsync: 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T18:42:54.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T18:42:54.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.db_log_dir: 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.wal_dir: 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: 
Options.WAL_size_limit_MB: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.write_buffer_manager: 0x5564bbf3f900 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T18:42:54.726 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T18:42:54.726 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.726 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.726 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.726 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.727 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.727 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:42:54.727 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.unordered_write: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.row_cache: None 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.wal_filter: None 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: 
debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.wal_compression: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 
2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:42:54 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T18:42:54.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:42:54.728 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_open_files: -1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Compression algorithms supported: 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kZSTD supported: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: 
kZSTDNotFinalCompression supported: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 
2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.merge_operator: 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_filter: None 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564bbf3a320) 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: cache_index_and_filter_blocks: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: pin_top_level_index_and_filter: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: index_type: 0 2026-03-09T18:42:54.728 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: data_block_index_type: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: index_shortening: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: checksum: 4 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: no_block_cache: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_cache: 0x5564bbf61350 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_cache_name: BinnedLRUCache 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_cache_options: 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: capacity : 536870912 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: num_shard_bits : 4 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: strict_capacity_limit : 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: high_pri_pool_ratio: 0.000 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_cache_compressed: (nil) 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: persistent_cache: (nil) 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_size: 4096 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_size_deviation: 10 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 
vm08 bash[46122]: block_restart_interval: 16 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: index_block_restart_interval: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: metadata_block_size: 4096 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: partition_filters: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: use_delta_encoding: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: filter_policy: bloomfilter 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: whole_key_filtering: 1 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: verify_compression: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: read_amp_bytes_per_bit: 0 2026-03-09T18:42:54.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: format_version: 5 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: enable_index_compression: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: block_align: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: max_auto_readahead_size: 262144 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: prepopulate_block_cache: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: initial_auto_readahead_size: 8192 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: num_file_reads_for_auto_readahead: 2 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 
2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression: NoCompression 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.num_levels: 7 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 
7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:42:54.729 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 
2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 
2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:42:54.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 
vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.ttl: 2592000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.blob_file_size: 
268435456 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 42.sst 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 44, last_sequence is 23540, log_number is 40,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 40 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 711aa06e-aa52-4f38-9afc-7bd63241c2e3 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081774454528, "job": 1, "event": "recovery_started", "wal_files": [40]} 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.451+0000 7fedb3f2ad80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #40 mode 2 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.459+0000 7fedb3f2ad80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081774463118, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 45, "file_size": 1454483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23545, "largest_seqno": 24824, "table_properties": {"data_size": 1449087, "index_size": 3016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, 
"raw_key_size": 13592, "raw_average_key_size": 25, "raw_value_size": 1437692, "raw_average_value_size": 2712, "num_data_blocks": 136, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 9, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773081774, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "711aa06e-aa52-4f38-9afc-7bd63241c2e3", "db_session_id": "N6UY9TWPGBU5FZ1MRKO3", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.459+0000 7fedb3f2ad80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081774463237, "job": 1, "event": "recovery_finished"} 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.459+0000 7fedb3f2ad80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 47 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.459+0000 7fedb3f2ad80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5564bbf62e00 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 4 rocksdb: DB pointer 0x5564bc06e000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 starting mon.b rank 2 at public addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] at bind addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 1 mon.b@-1(???) 
e3 preinit fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 mon.b@-1(???).mds e1 new map 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 mon.b@-1(???).mds e1 print_map 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: e1 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: legacy client fscid: -1 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: No filesystems configured 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 mon.b@-1(???).osd e100 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 mon.b@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:42:54.730 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 mon.b@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.463+0000 7fedb3f2ad80 0 mon.b@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:42:54.730 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:54 vm08 bash[46122]: debug 2026-03-09T18:42:54.467+0000 7fedb3f2ad80 1 mon.b@-1(???).paxosservice(auth 1..23) refresh upgraded, format 0 -> 3 2026-03-09T18:42:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.676012+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:42:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.676012+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.681549+0000 mon.a (mon.0) 1275 : cluster [INF] mon.a calling monitor election 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.681549+0000 mon.a (mon.0) 1275 : cluster [INF] mon.a calling monitor election 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.681675+0000 mon.c (mon.1) 159 : cluster [INF] mon.c calling monitor election 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.681675+0000 mon.c (mon.1) 159 : cluster [INF] mon.c calling monitor election 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 
2026-03-09T18:42:54.684609+0000 mon.a (mon.0) 1276 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.684609+0000 mon.a (mon.0) 1276 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.691254+0000 mon.a (mon.0) 1277 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.691254+0000 mon.a (mon.0) 1277 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.691390+0000 mon.a (mon.0) 1278 : cluster [DBG] fsmap 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.691390+0000 mon.a (mon.0) 1278 : cluster [DBG] fsmap 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.691496+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.691496+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.696330+0000 mon.a (mon.0) 1280 
: cluster [DBG] mgrmap e42: y(active, since 72s), standbys: x 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.696330+0000 mon.a (mon.0) 1280 : cluster [DBG] mgrmap e42: y(active, since 72s), standbys: x 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.700899+0000 mon.a (mon.0) 1281 : cluster [INF] overall HEALTH_OK 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: cluster 2026-03-09T18:42:54.700899+0000 mon.a (mon.0) 1281 : cluster [INF] overall HEALTH_OK 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: audit 2026-03-09T18:42:54.704732+0000 mon.a (mon.0) 1282 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: audit 2026-03-09T18:42:54.704732+0000 mon.a (mon.0) 1282 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: audit 2026-03-09T18:42:54.709563+0000 mon.a (mon.0) 1283 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: audit 2026-03-09T18:42:54.709563+0000 mon.a (mon.0) 1283 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: audit 2026-03-09T18:42:54.710397+0000 mon.a (mon.0) 1284 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:55 vm08 bash[46122]: audit 2026-03-09T18:42:54.710397+0000 
mon.a (mon.0) 1284 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:56.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.676012+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.681549+0000 mon.a (mon.0) 1275 : cluster [INF] mon.a calling monitor election 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.681675+0000 mon.c (mon.1) 159 : cluster [INF] mon.c calling monitor election 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.684609+0000 mon.a (mon.0) 1276 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.691254+0000 mon.a (mon.0) 1277 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.691390+0000 mon.a (mon.0) 1278 : cluster [DBG] fsmap 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.691496+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.696330+0000 mon.a (mon.0) 1280 : cluster [DBG] mgrmap e42: y(active, since 72s), standbys: x 2026-03-09T18:42:56.129 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: cluster 2026-03-09T18:42:54.700899+0000 mon.a (mon.0) 1281 : cluster [INF] overall HEALTH_OK 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: audit 2026-03-09T18:42:54.704732+0000 mon.a (mon.0) 1282 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: audit 2026-03-09T18:42:54.709563+0000 mon.a (mon.0) 1283 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:55 vm00 bash[22468]: audit 2026-03-09T18:42:54.710397+0000 mon.a (mon.0) 1284 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.676012+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.681549+0000 mon.a (mon.0) 1275 : cluster [INF] mon.a calling monitor election 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.681675+0000 mon.c (mon.1) 159 : cluster [INF] mon.c calling monitor election 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.684609+0000 mon.a (mon.0) 1276 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.691254+0000 mon.a (mon.0) 1277 : cluster [DBG] monmap e3: 3 mons at 
{a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]} 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.691390+0000 mon.a (mon.0) 1278 : cluster [DBG] fsmap 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.691496+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.696330+0000 mon.a (mon.0) 1280 : cluster [DBG] mgrmap e42: y(active, since 72s), standbys: x 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: cluster 2026-03-09T18:42:54.700899+0000 mon.a (mon.0) 1281 : cluster [INF] overall HEALTH_OK 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: audit 2026-03-09T18:42:54.704732+0000 mon.a (mon.0) 1282 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: audit 2026-03-09T18:42:54.709563+0000 mon.a (mon.0) 1283 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:42:56.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:55 vm00 bash[17468]: audit 2026-03-09T18:42:54.710397+0000 mon.a (mon.0) 1284 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:42:57.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:56 vm00 bash[22468]: cluster 2026-03-09T18:42:54.953857+0000 mgr.y (mgr.24991) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:42:57.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:56 vm00 bash[17468]: cluster 2026-03-09T18:42:54.953857+0000 mgr.y (mgr.24991) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:56 vm08 bash[46122]: cluster 2026-03-09T18:42:54.953857+0000 mgr.y (mgr.24991) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:57.233 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:56 vm08 bash[46122]: cluster 2026-03-09T18:42:54.953857+0000 mgr.y (mgr.24991) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:42:59.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:58 vm00 bash[22468]: cluster 2026-03-09T18:42:56.954253+0000 mgr.y (mgr.24991) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:59.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:42:58 vm00 bash[22468]: audit 2026-03-09T18:42:58.331584+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:58 vm00 bash[17468]: cluster 2026-03-09T18:42:56.954253+0000 mgr.y (mgr.24991) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:42:58 vm00 bash[17468]: audit 2026-03-09T18:42:58.331584+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-09T18:42:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:58 vm08 bash[46122]: cluster 2026-03-09T18:42:56.954253+0000 mgr.y (mgr.24991) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:58 vm08 bash[46122]: cluster 2026-03-09T18:42:56.954253+0000 mgr.y (mgr.24991) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:42:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:58 vm08 bash[46122]: audit 2026-03-09T18:42:58.331584+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:42:58 vm08 bash[46122]: audit 2026-03-09T18:42:58.331584+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:42:59.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:59 vm00 bash[53976]: debug 2026-03-09T18:42:59.475+0000 7f7a3ff17640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T18:42:59.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:42:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:42:59] "GET /metrics HTTP/1.1" 200 37487 "" "Prometheus/2.51.0" 2026-03-09T18:43:01.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:00 vm00 bash[22468]: cluster 2026-03-09T18:42:58.954602+0000 mgr.y (mgr.24991) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:01.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:00 vm00 bash[22468]: audit 
2026-03-09T18:43:00.152255+0000 mon.a (mon.0) 1286 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:00 vm00 bash[22468]: audit 2026-03-09T18:43:00.157648+0000 mon.a (mon.0) 1287 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:00 vm00 bash[17468]: cluster 2026-03-09T18:42:58.954602+0000 mgr.y (mgr.24991) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:01.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:00 vm00 bash[17468]: audit 2026-03-09T18:43:00.152255+0000 mon.a (mon.0) 1286 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:00 vm00 bash[17468]: audit 2026-03-09T18:43:00.157648+0000 mon.a (mon.0) 1287 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:00 vm08 bash[46122]: cluster 2026-03-09T18:42:58.954602+0000 mgr.y (mgr.24991) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:00 vm08 bash[46122]: cluster 2026-03-09T18:42:58.954602+0000 mgr.y (mgr.24991) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:00 vm08 bash[46122]: audit 2026-03-09T18:43:00.152255+0000 mon.a (mon.0) 1286 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:00 vm08 bash[46122]: audit 
2026-03-09T18:43:00.152255+0000 mon.a (mon.0) 1286 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:00 vm08 bash[46122]: audit 2026-03-09T18:43:00.157648+0000 mon.a (mon.0) 1287 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:01.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:00 vm08 bash[46122]: audit 2026-03-09T18:43:00.157648+0000 mon.a (mon.0) 1287 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:01 vm00 bash[22468]: audit 2026-03-09T18:43:00.789075+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:01 vm00 bash[22468]: audit 2026-03-09T18:43:00.795143+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:01 vm00 bash[17468]: audit 2026-03-09T18:43:00.789075+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:01 vm00 bash[17468]: audit 2026-03-09T18:43:00.795143+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:01 vm08 bash[46122]: audit 2026-03-09T18:43:00.789075+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:01 vm08 bash[46122]: audit 2026-03-09T18:43:00.789075+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:01 vm08 bash[46122]: audit 2026-03-09T18:43:00.795143+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:02.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:01 vm08 bash[46122]: audit 2026-03-09T18:43:00.795143+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:03.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:02 vm00 bash[22468]: cluster 2026-03-09T18:43:00.955294+0000 mgr.y (mgr.24991) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:03.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:02 vm00 bash[22468]: audit 2026-03-09T18:43:01.402019+0000 mgr.y (mgr.24991) 109 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:03.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:02 vm00 bash[17468]: cluster 2026-03-09T18:43:00.955294+0000 mgr.y (mgr.24991) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:03.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:02 vm00 bash[17468]: audit 2026-03-09T18:43:01.402019+0000 mgr.y (mgr.24991) 109 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:03.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:02 vm08 bash[46122]: cluster 2026-03-09T18:43:00.955294+0000 mgr.y (mgr.24991) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:03.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:02 vm08 bash[46122]: cluster 
2026-03-09T18:43:00.955294+0000 mgr.y (mgr.24991) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:03.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:02 vm08 bash[46122]: audit 2026-03-09T18:43:01.402019+0000 mgr.y (mgr.24991) 109 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:03.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:02 vm08 bash[46122]: audit 2026-03-09T18:43:01.402019+0000 mgr.y (mgr.24991) 109 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:05.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:04 vm00 bash[22468]: cluster 2026-03-09T18:43:02.955686+0000 mgr.y (mgr.24991) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:04 vm00 bash[17468]: cluster 2026-03-09T18:43:02.955686+0000 mgr.y (mgr.24991) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:04 vm08 bash[46122]: cluster 2026-03-09T18:43:02.955686+0000 mgr.y (mgr.24991) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:05.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:04 vm08 bash[46122]: cluster 2026-03-09T18:43:02.955686+0000 mgr.y (mgr.24991) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 
bash[17468]: cluster 2026-03-09T18:43:04.956265+0000 mgr.y (mgr.24991) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.542803+0000 mon.a (mon.0) 1290 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.549899+0000 mon.a (mon.0) 1291 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.550750+0000 mon.a (mon.0) 1292 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.551464+0000 mon.a (mon.0) 1293 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.558144+0000 mon.a (mon.0) 1294 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.604024+0000 mon.a (mon.0) 1295 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.605294+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.606376+0000 mon.a (mon.0) 1297 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.613397+0000 mon.a (mon.0) 1298 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.617962+0000 mon.a (mon.0) 1299 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.627387+0000 mon.a (mon.0) 1300 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.628471+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.635535+0000 mon.a (mon.0) 1302 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.636386+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.661754+0000 mon.a (mon.0) 1304 : audit [INF] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.662629+0000 mon.a (mon.0) 1305 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.669261+0000 mon.a (mon.0) 1306 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.670192+0000 mon.a (mon.0) 1307 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.676387+0000 mon.a (mon.0) 1308 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.677422+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.684239+0000 mon.a (mon.0) 1310 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.685484+0000 mon.a (mon.0) 1311 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.691443+0000 mon.a (mon.0) 1312 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.692491+0000 mon.a (mon.0) 1313 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.693571+0000 mon.a (mon.0) 1314 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.694333+0000 mon.a (mon.0) 1315 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.695343+0000 mon.a (mon.0) 1316 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.697047+0000 mon.a (mon.0) 1317 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.698134+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: 
audit 2026-03-09T18:43:06.698979+0000 mon.a (mon.0) 1319 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.699828+0000 mon.a (mon.0) 1320 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.700354+0000 mon.a (mon.0) 1321 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.701126+0000 mon.a (mon.0) 1322 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.701691+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.702488+0000 mon.a (mon.0) 1324 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.703241+0000 mon.a (mon.0) 1325 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:43:07.130 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.707970+0000 mon.a (mon.0) 1326 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.708558+0000 mon.a (mon.0) 1327 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.713714+0000 mon.a (mon.0) 1328 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.714084+0000 mon.a (mon.0) 1329 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.719516+0000 mon.a (mon.0) 1330 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.719905+0000 mon.a (mon.0) 1331 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.720262+0000 
mon.a (mon.0) 1332 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.724200+0000 mon.a (mon.0) 1333 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.724564+0000 mon.a (mon.0) 1334 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.724923+0000 mon.a (mon.0) 1335 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.728792+0000 mon.a (mon.0) 1336 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.730213+0000 mon.a (mon.0) 1337 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.730595+0000 mon.a (mon.0) 1338 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.736858+0000 mon.a (mon.0) 1339 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.737354+0000 mon.a (mon.0) 1340 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.737755+0000 mon.a (mon.0) 1341 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.742115+0000 mon.a (mon.0) 1342 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.742720+0000 mon.a (mon.0) 1343 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: cluster 2026-03-09T18:43:04.956265+0000 mgr.y (mgr.24991) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.542803+0000 mon.a (mon.0) 1290 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.549899+0000 mon.a (mon.0) 1291 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.550750+0000 mon.a (mon.0) 1292 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.551464+0000 mon.a (mon.0) 1293 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.558144+0000 mon.a (mon.0) 1294 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.604024+0000 mon.a (mon.0) 1295 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.605294+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.606376+0000 mon.a (mon.0) 1297 : audit [DBG] 
from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.613397+0000 mon.a (mon.0) 1298 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.617962+0000 mon.a (mon.0) 1299 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.627387+0000 mon.a (mon.0) 1300 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.628471+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.635535+0000 mon.a (mon.0) 1302 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.636386+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.661754+0000 mon.a (mon.0) 1304 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.662629+0000 mon.a (mon.0) 1305 : audit [INF] 
from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.669261+0000 mon.a (mon.0) 1306 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.670192+0000 mon.a (mon.0) 1307 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.676387+0000 mon.a (mon.0) 1308 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.677422+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.684239+0000 mon.a (mon.0) 1310 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.685484+0000 mon.a (mon.0) 1311 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.691443+0000 mon.a (mon.0) 1312 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.131 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.692491+0000 mon.a (mon.0) 1313 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.693571+0000 mon.a (mon.0) 1314 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.694333+0000 mon.a (mon.0) 1315 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.695343+0000 mon.a (mon.0) 1316 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.697047+0000 mon.a (mon.0) 1317 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.698134+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.698979+0000 mon.a (mon.0) 1319 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.699828+0000 mon.a (mon.0) 1320 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.700354+0000 mon.a (mon.0) 1321 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.701126+0000 mon.a (mon.0) 1322 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.701691+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.702488+0000 mon.a (mon.0) 1324 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.703241+0000 mon.a (mon.0) 1325 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.707970+0000 mon.a (mon.0) 1326 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.708558+0000 mon.a (mon.0) 1327 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.713714+0000 mon.a (mon.0) 1328 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:43:07.131 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.714084+0000 mon.a (mon.0) 1329 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.719516+0000 mon.a (mon.0) 1330 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.719905+0000 mon.a (mon.0) 1331 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.720262+0000 mon.a (mon.0) 1332 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 
2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.724200+0000 mon.a (mon.0) 1333 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.724564+0000 mon.a (mon.0) 1334 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.724923+0000 mon.a (mon.0) 1335 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.728792+0000 mon.a (mon.0) 1336 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.730213+0000 mon.a (mon.0) 1337 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.730595+0000 mon.a (mon.0) 1338 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 
bash[22468]: audit 2026-03-09T18:43:06.736858+0000 mon.a (mon.0) 1339 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.737354+0000 mon.a (mon.0) 1340 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.737755+0000 mon.a (mon.0) 1341 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.742115+0000 mon.a (mon.0) 1342 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.742720+0000 mon.a (mon.0) 1343 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.748150+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.748669+0000 mon.a (mon.0) 1345 : 
audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.749025+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.749413+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.749895+0000 mon.a (mon.0) 1348 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.750406+0000 mon.a (mon.0) 1349 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.750894+0000 mon.a (mon.0) 1350 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.751702+0000 mon.a (mon.0) 1351 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key 
del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.755041+0000 mon.a (mon.0) 1352 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.755426+0000 mon.a (mon.0) 1353 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.756449+0000 mon.a (mon.0) 1354 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.756910+0000 mon.a (mon.0) 1355 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.760834+0000 mon.a (mon.0) 1356 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.806093+0000 mon.a (mon.0) 1357 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.807275+0000 mon.a (mon.0) 1358 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:06 vm00 bash[22468]: audit 2026-03-09T18:43:06.807773+0000 mon.a (mon.0) 1359 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.748150+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.748669+0000 mon.a (mon.0) 1345 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.749025+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.749413+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.749895+0000 mon.a (mon.0) 1348 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: 
audit 2026-03-09T18:43:06.750406+0000 mon.a (mon.0) 1349 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.750894+0000 mon.a (mon.0) 1350 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.751702+0000 mon.a (mon.0) 1351 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.755041+0000 mon.a (mon.0) 1352 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.755426+0000 mon.a (mon.0) 1353 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.756449+0000 mon.a (mon.0) 1354 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.756910+0000 mon.a (mon.0) 1355 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.760834+0000 mon.a (mon.0) 1356 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.806093+0000 mon.a (mon.0) 1357 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.807275+0000 mon.a (mon.0) 1358 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.132 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:06 vm00 bash[17468]: audit 2026-03-09T18:43:06.807773+0000 mon.a (mon.0) 1359 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: cluster 2026-03-09T18:43:04.956265+0000 mgr.y (mgr.24991) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: cluster 2026-03-09T18:43:04.956265+0000 mgr.y (mgr.24991) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.542803+0000 mon.a (mon.0) 1290 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.542803+0000 mon.a (mon.0) 1290 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.549899+0000 mon.a (mon.0) 1291 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.549899+0000 mon.a (mon.0) 1291 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.550750+0000 mon.a (mon.0) 1292 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.550750+0000 mon.a (mon.0) 1292 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.551464+0000 mon.a (mon.0) 1293 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.551464+0000 mon.a (mon.0) 1293 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.558144+0000 mon.a (mon.0) 1294 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.558144+0000 mon.a (mon.0) 1294 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.604024+0000 mon.a (mon.0) 1295 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.604024+0000 mon.a (mon.0) 1295 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.605294+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.605294+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.606376+0000 mon.a (mon.0) 1297 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.606376+0000 mon.a (mon.0) 1297 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 
2026-03-09T18:43:06.613397+0000 mon.a (mon.0) 1298 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.613397+0000 mon.a (mon.0) 1298 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.617962+0000 mon.a (mon.0) 1299 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.617962+0000 mon.a (mon.0) 1299 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.627387+0000 mon.a (mon.0) 1300 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.627387+0000 mon.a (mon.0) 1300 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.628471+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.628471+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 
2026-03-09T18:43:06.635535+0000 mon.a (mon.0) 1302 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.635535+0000 mon.a (mon.0) 1302 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.636386+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.636386+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.661754+0000 mon.a (mon.0) 1304 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.661754+0000 mon.a (mon.0) 1304 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.662629+0000 mon.a (mon.0) 1305 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.662629+0000 mon.a (mon.0) 1305 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.669261+0000 mon.a (mon.0) 1306 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.669261+0000 mon.a (mon.0) 1306 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.670192+0000 mon.a (mon.0) 1307 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.670192+0000 mon.a (mon.0) 1307 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.676387+0000 mon.a (mon.0) 1308 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.676387+0000 mon.a (mon.0) 1308 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.677422+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 
2026-03-09T18:43:06.677422+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.684239+0000 mon.a (mon.0) 1310 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.684239+0000 mon.a (mon.0) 1310 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.685484+0000 mon.a (mon.0) 1311 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.685484+0000 mon.a (mon.0) 1311 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.691443+0000 mon.a (mon.0) 1312 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.691443+0000 mon.a (mon.0) 1312 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.692491+0000 mon.a (mon.0) 1313 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 
2026-03-09T18:43:06.692491+0000 mon.a (mon.0) 1313 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.693571+0000 mon.a (mon.0) 1314 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.693571+0000 mon.a (mon.0) 1314 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.694333+0000 mon.a (mon.0) 1315 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.694333+0000 mon.a (mon.0) 1315 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.695343+0000 mon.a (mon.0) 1316 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.695343+0000 mon.a (mon.0) 1316 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.697047+0000 mon.a (mon.0) 1317 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.697047+0000 mon.a (mon.0) 1317 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.698134+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.698134+0000 mon.a (mon.0) 1318 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.698979+0000 mon.a (mon.0) 1319 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.698979+0000 mon.a (mon.0) 1319 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.699828+0000 mon.a (mon.0) 1320 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 
2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.699828+0000 mon.a (mon.0) 1320 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.700354+0000 mon.a (mon.0) 1321 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.700354+0000 mon.a (mon.0) 1321 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.701126+0000 mon.a (mon.0) 1322 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.701126+0000 mon.a (mon.0) 1322 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.701691+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.701691+0000 mon.a (mon.0) 1323 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.702488+0000 mon.a (mon.0) 1324 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.702488+0000 mon.a (mon.0) 1324 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.703241+0000 mon.a (mon.0) 1325 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.703241+0000 mon.a (mon.0) 1325 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.707970+0000 mon.a (mon.0) 1326 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.707970+0000 mon.a (mon.0) 1326 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 
2026-03-09T18:43:06.708558+0000 mon.a (mon.0) 1327 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.708558+0000 mon.a (mon.0) 1327 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.713714+0000 mon.a (mon.0) 1328 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.713714+0000 mon.a (mon.0) 1328 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:43:07.227 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.714084+0000 mon.a (mon.0) 1329 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.714084+0000 mon.a (mon.0) 1329 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.719516+0000 mon.a (mon.0) 1330 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' 
entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.719516+0000 mon.a (mon.0) 1330 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.719905+0000 mon.a (mon.0) 1331 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.719905+0000 mon.a (mon.0) 1331 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.720262+0000 mon.a (mon.0) 1332 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.720262+0000 mon.a (mon.0) 1332 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.724200+0000 mon.a (mon.0) 1333 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 
2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.724200+0000 mon.a (mon.0) 1333 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.724564+0000 mon.a (mon.0) 1334 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.724564+0000 mon.a (mon.0) 1334 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.724923+0000 mon.a (mon.0) 1335 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.724923+0000 mon.a (mon.0) 1335 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.728792+0000 mon.a (mon.0) 1336 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 
vm08 bash[46122]: audit 2026-03-09T18:43:06.728792+0000 mon.a (mon.0) 1336 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.730213+0000 mon.a (mon.0) 1337 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.730213+0000 mon.a (mon.0) 1337 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.730595+0000 mon.a (mon.0) 1338 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.730595+0000 mon.a (mon.0) 1338 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.736858+0000 mon.a (mon.0) 1339 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.736858+0000 mon.a (mon.0) 
1339 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.737354+0000 mon.a (mon.0) 1340 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.737354+0000 mon.a (mon.0) 1340 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.737755+0000 mon.a (mon.0) 1341 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.737755+0000 mon.a (mon.0) 1341 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.742115+0000 mon.a (mon.0) 1342 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.742115+0000 mon.a (mon.0) 1342 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 
cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.742720+0000 mon.a (mon.0) 1343 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.742720+0000 mon.a (mon.0) 1343 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.748150+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.748150+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.748669+0000 mon.a (mon.0) 1345 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.748669+0000 mon.a (mon.0) 1345 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.749025+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.749025+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.749413+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.749413+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.749895+0000 mon.a (mon.0) 1348 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.749895+0000 mon.a (mon.0) 1348 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.750406+0000 
mon.a (mon.0) 1349 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.750406+0000 mon.a (mon.0) 1349 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.750894+0000 mon.a (mon.0) 1350 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.750894+0000 mon.a (mon.0) 1350 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.751702+0000 mon.a (mon.0) 1351 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.751702+0000 mon.a (mon.0) 1351 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.755041+0000 mon.a (mon.0) 1352 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key 
del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.755041+0000 mon.a (mon.0) 1352 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:43:07.228 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.755426+0000 mon.a (mon.0) 1353 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.755426+0000 mon.a (mon.0) 1353 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.756449+0000 mon.a (mon.0) 1354 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.756449+0000 mon.a (mon.0) 1354 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.756910+0000 mon.a (mon.0) 1355 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.756910+0000 mon.a (mon.0) 1355 : audit [INF] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.760834+0000 mon.a (mon.0) 1356 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.760834+0000 mon.a (mon.0) 1356 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.806093+0000 mon.a (mon.0) 1357 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.806093+0000 mon.a (mon.0) 1357 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.807275+0000 mon.a (mon.0) 1358 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.807275+0000 mon.a (mon.0) 1358 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.807773+0000 mon.a (mon.0) 1359 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T18:43:07.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:06 vm08 bash[46122]: audit 2026-03-09T18:43:06.807773+0000 mon.a (mon.0) 1359 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.534884+0000 mgr.y (mgr.24991) 112 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.606855+0000 mgr.y (mgr.24991) 113 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.618401+0000 mgr.y (mgr.24991) 114 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.628950+0000 mgr.y (mgr.24991) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.636816+0000 mgr.y (mgr.24991) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.670678+0000 mgr.y (mgr.24991) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.677837+0000 mgr.y (mgr.24991) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 
2026-03-09T18:43:06.685903+0000 mgr.y (mgr.24991) 119 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.692967+0000 mgr.y (mgr.24991) 120 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.694739+0000 mgr.y (mgr.24991) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.697487+0000 mgr.y (mgr.24991) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.699344+0000 mgr.y (mgr.24991) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.700677+0000 mgr.y (mgr.24991) 124 : cephadm [INF] Upgrade: Setting container_image for all loki 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.702035+0000 mgr.y (mgr.24991) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.702787+0000 mgr.y (mgr.24991) 126 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: cephadm 2026-03-09T18:43:06.751316+0000 mgr.y (mgr.24991) 127 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:07 vm00 bash[22468]: audit 2026-03-09T18:43:06.813495+0000 mon.a (mon.0) 1360 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.534884+0000 mgr.y (mgr.24991) 112 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.606855+0000 mgr.y (mgr.24991) 113 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.618401+0000 mgr.y (mgr.24991) 114 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.628950+0000 mgr.y (mgr.24991) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.636816+0000 mgr.y (mgr.24991) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.670678+0000 mgr.y (mgr.24991) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.677837+0000 mgr.y (mgr.24991) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.685903+0000 mgr.y (mgr.24991) 119 : cephadm [INF] Upgrade: Setting 
container_image for all nvmeof 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.692967+0000 mgr.y (mgr.24991) 120 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.694739+0000 mgr.y (mgr.24991) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.697487+0000 mgr.y (mgr.24991) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.699344+0000 mgr.y (mgr.24991) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.700677+0000 mgr.y (mgr.24991) 124 : cephadm [INF] Upgrade: Setting container_image for all loki 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.702035+0000 mgr.y (mgr.24991) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.702787+0000 mgr.y (mgr.24991) 126 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: cephadm 2026-03-09T18:43:06.751316+0000 mgr.y (mgr.24991) 127 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:43:08.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:07 vm00 bash[17468]: audit 2026-03-09T18:43:06.813495+0000 mon.a (mon.0) 1360 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:08.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.534884+0000 mgr.y (mgr.24991) 112 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.534884+0000 mgr.y (mgr.24991) 112 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.606855+0000 mgr.y (mgr.24991) 113 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.606855+0000 mgr.y (mgr.24991) 113 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.618401+0000 mgr.y (mgr.24991) 114 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.618401+0000 mgr.y (mgr.24991) 114 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.628950+0000 mgr.y (mgr.24991) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.628950+0000 mgr.y (mgr.24991) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 
2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.636816+0000 mgr.y (mgr.24991) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.636816+0000 mgr.y (mgr.24991) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.670678+0000 mgr.y (mgr.24991) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.670678+0000 mgr.y (mgr.24991) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.677837+0000 mgr.y (mgr.24991) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.677837+0000 mgr.y (mgr.24991) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.685903+0000 mgr.y (mgr.24991) 119 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.685903+0000 mgr.y (mgr.24991) 119 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.692967+0000 mgr.y (mgr.24991) 120 : cephadm [INF] Upgrade: Setting 
container_image for all node-exporter 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.692967+0000 mgr.y (mgr.24991) 120 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.694739+0000 mgr.y (mgr.24991) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.694739+0000 mgr.y (mgr.24991) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.697487+0000 mgr.y (mgr.24991) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.697487+0000 mgr.y (mgr.24991) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.699344+0000 mgr.y (mgr.24991) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.699344+0000 mgr.y (mgr.24991) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.700677+0000 mgr.y (mgr.24991) 124 : cephadm [INF] Upgrade: Setting container_image for all loki 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.700677+0000 mgr.y 
(mgr.24991) 124 : cephadm [INF] Upgrade: Setting container_image for all loki 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.702035+0000 mgr.y (mgr.24991) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.702035+0000 mgr.y (mgr.24991) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.702787+0000 mgr.y (mgr.24991) 126 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.702787+0000 mgr.y (mgr.24991) 126 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.751316+0000 mgr.y (mgr.24991) 127 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: cephadm 2026-03-09T18:43:06.751316+0000 mgr.y (mgr.24991) 127 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: audit 2026-03-09T18:43:06.813495+0000 mon.a (mon.0) 1360 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:08.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:07 vm08 bash[46122]: audit 2026-03-09T18:43:06.813495+0000 mon.a (mon.0) 1360 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:09.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:08 vm00 bash[22468]: cluster 2026-03-09T18:43:06.956660+0000 mgr.y (mgr.24991) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:09.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:08 vm00 bash[22468]: audit 2026-03-09T18:43:08.331862+0000 mon.a (mon.0) 1361 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:08 vm00 bash[17468]: cluster 2026-03-09T18:43:06.956660+0000 mgr.y (mgr.24991) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:08 vm00 bash[17468]: audit 2026-03-09T18:43:08.331862+0000 mon.a (mon.0) 1361 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:08 vm08 bash[46122]: cluster 2026-03-09T18:43:06.956660+0000 mgr.y (mgr.24991) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:08 vm08 bash[46122]: cluster 2026-03-09T18:43:06.956660+0000 mgr.y (mgr.24991) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 98 
MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:08 vm08 bash[46122]: audit 2026-03-09T18:43:08.331862+0000 mon.a (mon.0) 1361 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:08 vm08 bash[46122]: audit 2026-03-09T18:43:08.331862+0000 mon.a (mon.0) 1361 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:09.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:43:09] "GET /metrics HTTP/1.1" 200 37487 "" "Prometheus/2.51.0" 2026-03-09T18:43:11.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:10 vm00 bash[22468]: cluster 2026-03-09T18:43:08.957047+0000 mgr.y (mgr.24991) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:10 vm00 bash[17468]: cluster 2026-03-09T18:43:08.957047+0000 mgr.y (mgr.24991) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:10 vm08 bash[46122]: cluster 2026-03-09T18:43:08.957047+0000 mgr.y (mgr.24991) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:10 vm08 bash[46122]: cluster 2026-03-09T18:43:08.957047+0000 mgr.y (mgr.24991) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:13.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:12 vm00 bash[22468]: cluster 2026-03-09T18:43:10.957720+0000 mgr.y 
(mgr.24991) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:13.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:12 vm00 bash[22468]: audit 2026-03-09T18:43:11.405549+0000 mgr.y (mgr.24991) 131 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:12 vm00 bash[17468]: cluster 2026-03-09T18:43:10.957720+0000 mgr.y (mgr.24991) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:12 vm00 bash[17468]: audit 2026-03-09T18:43:11.405549+0000 mgr.y (mgr.24991) 131 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:13.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:12 vm08 bash[46122]: cluster 2026-03-09T18:43:10.957720+0000 mgr.y (mgr.24991) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:13.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:12 vm08 bash[46122]: cluster 2026-03-09T18:43:10.957720+0000 mgr.y (mgr.24991) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:13.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:12 vm08 bash[46122]: audit 2026-03-09T18:43:11.405549+0000 mgr.y (mgr.24991) 131 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:13.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:12 vm08 bash[46122]: audit 
2026-03-09T18:43:11.405549+0000 mgr.y (mgr.24991) 131 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:14.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:13 vm00 bash[22468]: audit 2026-03-09T18:43:13.331820+0000 mon.a (mon.0) 1362 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:43:14.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:13 vm00 bash[17468]: audit 2026-03-09T18:43:13.331820+0000 mon.a (mon.0) 1362 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:43:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:13 vm08 bash[46122]: audit 2026-03-09T18:43:13.331820+0000 mon.a (mon.0) 1362 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:43:14.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:13 vm08 bash[46122]: audit 2026-03-09T18:43:13.331820+0000 mon.a (mon.0) 1362 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:43:15.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:14 vm00 bash[22468]: cluster 2026-03-09T18:43:12.958104+0000 mgr.y (mgr.24991) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:15.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:14 vm00 bash[17468]: cluster 2026-03-09T18:43:12.958104+0000 mgr.y (mgr.24991) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:15.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:14 vm08 bash[46122]: cluster 2026-03-09T18:43:12.958104+0000 mgr.y (mgr.24991) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:15.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:14 vm08 bash[46122]: cluster 2026-03-09T18:43:12.958104+0000 mgr.y (mgr.24991) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:17.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:16 vm00 bash[22468]: cluster 2026-03-09T18:43:14.958705+0000 mgr.y (mgr.24991) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:17.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:16 vm00 bash[17468]: cluster 2026-03-09T18:43:14.958705+0000 mgr.y (mgr.24991) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:17.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:16 vm08 bash[46122]: cluster 2026-03-09T18:43:14.958705+0000 mgr.y (mgr.24991) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:17.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:16 vm08 bash[46122]: cluster 2026-03-09T18:43:14.958705+0000 mgr.y (mgr.24991) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:43:19.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:18 vm00 bash[22468]: cluster 2026-03-09T18:43:16.959058+0000 mgr.y (mgr.24991) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:19.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:18 vm00 bash[17468]: cluster 2026-03-09T18:43:16.959058+0000 mgr.y (mgr.24991) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:19.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:18 vm08 bash[46122]: cluster 2026-03-09T18:43:16.959058+0000 mgr.y (mgr.24991) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:19.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:18 vm08 bash[46122]: cluster 2026-03-09T18:43:16.959058+0000 mgr.y (mgr.24991) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:43:19] "GET /metrics HTTP/1.1" 200 37551 "" "Prometheus/2.51.0" 2026-03-09T18:43:21.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:20 vm00 bash[17468]: cluster 2026-03-09T18:43:18.959388+0000 mgr.y (mgr.24991) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:21.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:20 vm00 bash[22468]: cluster 2026-03-09T18:43:18.959388+0000 mgr.y (mgr.24991) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:21.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:20 vm08 bash[46122]: cluster 2026-03-09T18:43:18.959388+0000 mgr.y (mgr.24991) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:21.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:20 vm08 bash[46122]: cluster 
2026-03-09T18:43:18.959388+0000 mgr.y (mgr.24991) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:43:22.582 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (13m) 83s ago 20m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (66s) 22s ago 20m 65.1M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (92s) 83s ago 20m 41.4M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (89s) 22s ago 23m 464M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (10m) 83s ago 24m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (24m) 83s ago 24m 71.7M 2048M 17.2.0 e1d6a67b021e 819e8890799a 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (28s) 22s ago 23m 19.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (23m) 83s ago 23m 57.2M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c 2026-03-09T18:43:23.063 
INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (13m) 83s ago 20m 7879k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (13m) 22s ago 20m 7923k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (23m) 83s ago 23m 52.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (22m) 83s ago 22m 53.6M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (22m) 83s ago 22m 48.8M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (22m) 83s ago 22m 54.7M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (22m) 22s ago 22m 53.6M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (21m) 22s ago 21m 52.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (21m) 22s ago 21m 51.4M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (21m) 22s ago 21m 52.0M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (91s) 22s ago 20m 40.7M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (20m) 83s ago 20m 87.8M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:43:23.063 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (20m) 22s ago 20m 89.2M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:43:23.076 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:22 vm00 bash[22468]: cluster 2026-03-09T18:43:20.959939+0000 mgr.y (mgr.24991) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:23.076 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:22 vm00 bash[22468]: audit 2026-03-09T18:43:21.410436+0000 mgr.y (mgr.24991) 137 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:23.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:22 vm00 bash[17468]: cluster 2026-03-09T18:43:20.959939+0000 mgr.y (mgr.24991) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:23.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:22 vm00 bash[17468]: audit 2026-03-09T18:43:21.410436+0000 mgr.y (mgr.24991) 137 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:23.122 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mon | length == 2'"'"'' 2026-03-09T18:43:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:22 vm08 bash[46122]: cluster 2026-03-09T18:43:20.959939+0000 mgr.y (mgr.24991) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:22 vm08 bash[46122]: cluster 2026-03-09T18:43:20.959939+0000 mgr.y (mgr.24991) 136 : cluster [DBG] pgmap v52: 
161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:22 vm08 bash[46122]: audit 2026-03-09T18:43:21.410436+0000 mgr.y (mgr.24991) 137 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:23.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:22 vm08 bash[46122]: audit 2026-03-09T18:43:21.410436+0000 mgr.y (mgr.24991) 137 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:23.634 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:43:23.678 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T18:43:23.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:23 vm00 bash[17468]: audit 2026-03-09T18:43:22.488785+0000 mgr.y (mgr.24991) 138 : audit [DBG] from='client.34103 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:23.886 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:23 vm00 bash[17468]: audit 2026-03-09T18:43:23.625657+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 
192.168.123.100:0/1990377037' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:23.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:23 vm00 bash[22468]: audit 2026-03-09T18:43:22.488785+0000 mgr.y (mgr.24991) 138 : audit [DBG] from='client.34103 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:23.886 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:23 vm00 bash[22468]: audit 2026-03-09T18:43:23.625657+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 192.168.123.100:0/1990377037' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:24.162 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:43:24.162 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null, 2026-03-09T18:43:24.162 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false, 2026-03-09T18:43:24.163 INFO:teuthology.orchestra.run.vm00.stdout: "which": "", 2026-03-09T18:43:24.163 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:43:24.163 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null, 2026-03-09T18:43:24.163 INFO:teuthology.orchestra.run.vm00.stdout: "message": "", 2026-03-09T18:43:24.163 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:43:24.163 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:43:24.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:23 vm08 bash[46122]: audit 2026-03-09T18:43:22.488785+0000 mgr.y (mgr.24991) 138 : audit [DBG] from='client.34103 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:24.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:23 vm08 bash[46122]: audit 2026-03-09T18:43:22.488785+0000 mgr.y (mgr.24991) 138 : audit [DBG] from='client.34103 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:43:24.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:23 vm08 bash[46122]: audit 2026-03-09T18:43:23.625657+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 192.168.123.100:0/1990377037' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:24.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:23 vm08 bash[46122]: audit 2026-03-09T18:43:23.625657+0000 mon.a (mon.0) 1363 : audit [DBG] from='client.? 192.168.123.100:0/1990377037' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:24.225 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:43:24.753 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:43:24.815 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.y | awk '"'"'{print $2}'"'"')' 2026-03-09T18:43:25.024 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:24 vm00 bash[22468]: cluster 2026-03-09T18:43:22.960233+0000 mgr.y (mgr.24991) 139 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:24 vm00 bash[22468]: audit 2026-03-09T18:43:23.061610+0000 mgr.y (mgr.24991) 140 : audit [DBG] from='client.25243 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:24 vm00 bash[22468]: audit 2026-03-09T18:43:24.165839+0000 mgr.y (mgr.24991) 141 : audit [DBG] from='client.15378 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:24 vm00 bash[22468]: audit 2026-03-09T18:43:24.756551+0000 mon.a (mon.0) 1364 : audit [DBG] from='client.? 192.168.123.100:0/4246005733' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:24 vm00 bash[17468]: cluster 2026-03-09T18:43:22.960233+0000 mgr.y (mgr.24991) 139 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:24 vm00 bash[17468]: audit 2026-03-09T18:43:23.061610+0000 mgr.y (mgr.24991) 140 : audit [DBG] from='client.25243 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:24 vm00 bash[17468]: audit 2026-03-09T18:43:24.165839+0000 mgr.y (mgr.24991) 141 : audit [DBG] from='client.15378 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.025 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:24 vm00 bash[17468]: audit 2026-03-09T18:43:24.756551+0000 mon.a (mon.0) 1364 : audit [DBG] from='client.? 
192.168.123.100:0/4246005733' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: cluster 2026-03-09T18:43:22.960233+0000 mgr.y (mgr.24991) 139 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: cluster 2026-03-09T18:43:22.960233+0000 mgr.y (mgr.24991) 139 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: audit 2026-03-09T18:43:23.061610+0000 mgr.y (mgr.24991) 140 : audit [DBG] from='client.25243 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: audit 2026-03-09T18:43:23.061610+0000 mgr.y (mgr.24991) 140 : audit [DBG] from='client.25243 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: audit 2026-03-09T18:43:24.165839+0000 mgr.y (mgr.24991) 141 : audit [DBG] from='client.15378 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: audit 2026-03-09T18:43:24.165839+0000 mgr.y (mgr.24991) 141 : audit [DBG] from='client.15378 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: audit 2026-03-09T18:43:24.756551+0000 mon.a (mon.0) 
1364 : audit [DBG] from='client.? 192.168.123.100:0/4246005733' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:43:25.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:24 vm08 bash[46122]: audit 2026-03-09T18:43:24.756551+0000 mon.a (mon.0) 1364 : audit [DBG] from='client.? 192.168.123.100:0/4246005733' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:43:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:25 vm08 bash[46122]: cluster 2026-03-09T18:43:24.960825+0000 mgr.y (mgr.24991) 142 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:43:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:25 vm08 bash[46122]: cluster 2026-03-09T18:43:24.960825+0000 mgr.y (mgr.24991) 142 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:43:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:25 vm08 bash[46122]: audit 2026-03-09T18:43:25.409341+0000 mgr.y (mgr.24991) 143 : audit [DBG] from='client.25264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:25 vm08 bash[46122]: audit 2026-03-09T18:43:25.409341+0000 mgr.y (mgr.24991) 143 : audit [DBG] from='client.25264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:25 vm08 bash[46122]: audit 2026-03-09T18:43:25.643030+0000 mgr.y (mgr.24991) 144 : audit [DBG] from='client.25270 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm00", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-09T18:43:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:25 vm08 bash[46122]: audit 2026-03-09T18:43:25.643030+0000 mgr.y (mgr.24991) 144 : audit [DBG] from='client.25270 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:26.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:25 vm00 bash[22468]: cluster 2026-03-09T18:43:24.960825+0000 mgr.y (mgr.24991) 142 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:43:26.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:25 vm00 bash[22468]: audit 2026-03-09T18:43:25.409341+0000 mgr.y (mgr.24991) 143 : audit [DBG] from='client.25264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:26.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:25 vm00 bash[22468]: audit 2026-03-09T18:43:25.643030+0000 mgr.y (mgr.24991) 144 : audit [DBG] from='client.25270 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:25 vm00 bash[17468]: cluster 2026-03-09T18:43:24.960825+0000 mgr.y (mgr.24991) 142 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T18:43:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:25 vm00 bash[17468]: audit 2026-03-09T18:43:25.409341+0000 mgr.y (mgr.24991) 143 : audit [DBG] from='client.25264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:43:26.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:25 vm00 bash[17468]: audit 2026-03-09T18:43:25.643030+0000 mgr.y (mgr.24991) 144 : audit [DBG] from='client.25270 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:27.093 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:43:27.168 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-09T18:43:27.729 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (13m) 89s ago 20m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (71s) 28s ago 20m 65.1M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (97s) 89s ago 20m 41.4M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (94s) 28s ago 23m 464M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:43:28.170 
INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (10m) 89s ago 24m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (24m) 89s ago 24m 71.7M 2048M 17.2.0 e1d6a67b021e 819e8890799a 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (33s) 28s ago 23m 19.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:43:28.170 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (23m) 89s ago 23m 57.2M 2048M 17.2.0 e1d6a67b021e a82073bc5d9c 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (13m) 89s ago 20m 7879k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (13m) 28s ago 20m 7923k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (23m) 89s ago 23m 52.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (22m) 89s ago 22m 53.6M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (22m) 89s ago 22m 48.8M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (22m) 89s ago 22m 54.7M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (22m) 28s ago 22m 53.6M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (21m) 28s ago 21m 52.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (21m) 28s ago 21m 51.4M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:43:28.171 
INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (21m) 28s ago 21m 52.0M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (96s) 28s ago 20m 40.7M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (20m) 89s ago 20m 87.8M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:43:28.171 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (20m) 28s ago 20m 89.2M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: cluster 2026-03-09T18:43:26.961213+0000 mgr.y (mgr.24991) 145 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: cephadm 2026-03-09T18:43:27.077231+0000 mgr.y (mgr.24991) 146 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: audit 2026-03-09T18:43:27.091883+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: audit 2026-03-09T18:43:27.092704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: audit 2026-03-09T18:43:27.096222+0000 mon.a (mon.0) 1367 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: audit 2026-03-09T18:43:27.096934+0000 mon.a (mon.0) 1368 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: audit 2026-03-09T18:43:27.104026+0000 mon.a (mon.0) 1369 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: cephadm 2026-03-09T18:43:27.162666+0000 mgr.y (mgr.24991) 147 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:28 vm00 bash[17468]: audit 2026-03-09T18:43:27.720776+0000 mgr.y (mgr.24991) 148 : audit [DBG] from='client.15393 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: cluster 2026-03-09T18:43:26.961213+0000 mgr.y (mgr.24991) 145 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: cephadm 2026-03-09T18:43:27.077231+0000 mgr.y (mgr.24991) 146 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: audit 2026-03-09T18:43:27.091883+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: audit 
2026-03-09T18:43:27.092704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: audit 2026-03-09T18:43:27.096222+0000 mon.a (mon.0) 1367 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: audit 2026-03-09T18:43:27.096934+0000 mon.a (mon.0) 1368 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: audit 2026-03-09T18:43:27.104026+0000 mon.a (mon.0) 1369 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: cephadm 2026-03-09T18:43:27.162666+0000 mgr.y (mgr.24991) 147 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:43:28.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:28 vm00 bash[22468]: audit 2026-03-09T18:43:27.720776+0000 mgr.y (mgr.24991) 148 : audit [DBG] from='client.15393 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) 
squid (stable)": 1 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "mds": {}, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 12, 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:43:28.471 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: cluster 2026-03-09T18:43:26.961213+0000 mgr.y (mgr.24991) 145 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: 
cephadm 2026-03-09T18:43:27.077231+0000 mgr.y (mgr.24991) 146 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: audit 2026-03-09T18:43:27.091883+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: audit 2026-03-09T18:43:27.092704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: audit 2026-03-09T18:43:27.096222+0000 mon.a (mon.0) 1367 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: audit 2026-03-09T18:43:27.096934+0000 mon.a (mon.0) 1368 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: audit 2026-03-09T18:43:27.104026+0000 mon.a (mon.0) 1369 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: cephadm 2026-03-09T18:43:27.162666+0000 mgr.y (mgr.24991) 147 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:43:28.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:28 vm08 bash[46122]: audit 2026-03-09T18:43:27.720776+0000 mgr.y (mgr.24991) 148 : audit [DBG] from='client.15393 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) mon on host(s) vm00", 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "0/2 daemons upgraded", 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading mon daemons", 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:43:28.797 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:27.945799+0000 mgr.y (mgr.24991) 149 : audit [DBG] from='client.25279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.167349+0000 mgr.y (mgr.24991) 150 : audit [DBG] from='client.15399 -' entity='client.admin'
cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.332191+0000 mon.a (mon.0) 1370 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.474130+0000 mon.c (mon.1) 160 : audit [DBG] from='client.? 192.168.123.100:0/1173294168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.715015+0000 mon.a (mon.0) 1371 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: cephadm 2026-03-09T18:43:28.715325+0000 mgr.y (mgr.24991) 151 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: cephadm 2026-03-09T18:43:28.715365+0000 mgr.y (mgr.24991) 152 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.719669+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.721050+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: cephadm 2026-03-09T18:43:28.721631+0000 mgr.y (mgr.24991) 153 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.726548+0000 mon.a (mon.0) 1374 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.729205+0000 mon.a (mon.0) 1375 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.729889+0000 mon.a (mon.0) 1376 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: cephadm 2026-03-09T18:43:28.730212+0000 mgr.y (mgr.24991) 154 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: audit 2026-03-09T18:43:28.800446+0000 mgr.y (mgr.24991) 155 : audit [DBG] from='client.25297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:27.945799+0000 mgr.y (mgr.24991) 149 : audit [DBG] from='client.25279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 
2026-03-09T18:43:28.167349+0000 mgr.y (mgr.24991) 150 : audit [DBG] from='client.15399 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.332191+0000 mon.a (mon.0) 1370 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.474130+0000 mon.c (mon.1) 160 : audit [DBG] from='client.? 192.168.123.100:0/1173294168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.715015+0000 mon.a (mon.0) 1371 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: cephadm 2026-03-09T18:43:28.715325+0000 mgr.y (mgr.24991) 151 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: cephadm 2026-03-09T18:43:28.715365+0000 mgr.y (mgr.24991) 152 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.719669+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 
bash[17468]: audit 2026-03-09T18:43:28.721050+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: cephadm 2026-03-09T18:43:28.721631+0000 mgr.y (mgr.24991) 153 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.726548+0000 mon.a (mon.0) 1374 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.729205+0000 mon.a (mon.0) 1375 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.729889+0000 mon.a (mon.0) 1376 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: cephadm 2026-03-09T18:43:28.730212+0000 mgr.y (mgr.24991) 154 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-09T18:43:29.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 bash[17468]: audit 2026-03-09T18:43:28.800446+0000 mgr.y (mgr.24991) 155 : audit [DBG] from='client.25297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:43:29.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:27.945799+0000 mgr.y (mgr.24991) 149 : audit [DBG] from='client.25279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:43:29.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.167349+0000 mgr.y (mgr.24991) 150 : audit [DBG] from='client.15399 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.332191+0000 mon.a (mon.0) 1370 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.474130+0000 mon.c (mon.1) 160 : audit [DBG] from='client.? 192.168.123.100:0/1173294168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.715015+0000 mon.a (mon.0) 1371 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: cephadm 2026-03-09T18:43:28.715325+0000 mgr.y (mgr.24991) 151 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: cephadm 2026-03-09T18:43:28.715365+0000 mgr.y (mgr.24991) 152 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.719669+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.721050+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: cephadm 2026-03-09T18:43:28.721631+0000 mgr.y (mgr.24991) 153 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.726548+0000 mon.a (mon.0) 1374 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.729205+0000 mon.a (mon.0) 1375 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.729889+0000 mon.a (mon.0) 1376 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: cephadm 2026-03-09T18:43:28.730212+0000 mgr.y (mgr.24991) 154 : cephadm [INF] Upgrade: It appears safe to stop mon.c
2026-03-09T18:43:29.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:29 vm08 bash[46122]: audit 2026-03-09T18:43:28.800446+0000 mgr.y (mgr.24991) 155 : audit [DBG] from='client.25297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:43:29.804 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.804 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.805 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.805 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:43:29] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0" 2026-03-09T18:43:29.805 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.805 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.805 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.805 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:29.805 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:43:29.805 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: Stopping Ceph mon.c for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:43:30.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: debug 2026-03-09T18:43:29.852+0000 7fa1cab52700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T18:43:30.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[22468]: debug 2026-03-09T18:43:29.852+0000 7fa1cab52700 -1 mon.c@1(peon) e3 *** Got Signal Terminated ***
2026-03-09T18:43:30.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 bash[65417]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon-c
2026-03-09T18:43:30.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.c.service: Deactivated successfully.
2026-03-09T18:43:30.072 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:29 vm00 systemd[1]: Stopped Ceph mon.c for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: Started Ceph mon.c for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.288+0000 7fdb870e0d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 0 pidfile_write: ignore empty --pid-file
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 0 load: jerasure load: lrc
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Git sha 0
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T18:43:30.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: DB SUMMARY
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: DB Session ID: LN0JSPA88ABWKVZMEUX9
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 2063 Bytes
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 1, files: 000042.sst
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000040.log size: 3622264 ;
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.create_if_missing: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.env: 0x56077ed46dc0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.info_log: 0x5607b142b7e0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.statistics: (nil)
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.use_fsync: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.db_log_dir:
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.wal_dir:
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T18:43:30.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.write_buffer_manager: 0x5607b142f900
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.unordered_write: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.row_cache: None
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.wal_filter: None
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.two_write_queues: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.wal_compression: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.atomic_flush: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T18:43:30.381 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T18:43:30.382 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T18:43:30.382 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_open_files: -1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Compression algorithms supported:
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kZSTD supported: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kXpressCompression supported: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kZlibCompression supported: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.merge_operator:
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_filter: None
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b142a3c0)
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cache_index_and_filter_blocks: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: pin_top_level_index_and_filter: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: index_type: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: data_block_index_type: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: index_shortening: 1
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: data_block_hash_table_util_ratio: 0.750000
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: checksum: 4
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: no_block_cache: 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_cache: 0x5607b1451350
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_cache_name: BinnedLRUCache
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_cache_options:
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: capacity : 536870912
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: num_shard_bits : 4
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: strict_capacity_limit : 0
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: high_pri_pool_ratio: 0.000
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_cache_compressed: (nil)
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: persistent_cache: (nil)
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_size: 4096
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_size_deviation: 10
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_restart_interval: 16
2026-03-09T18:43:30.383 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: index_block_restart_interval: 1
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: metadata_block_size: 4096
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: partition_filters: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: use_delta_encoding: 1
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: filter_policy: bloomfilter
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: whole_key_filtering: 1
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: verify_compression: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: read_amp_bytes_per_bit: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: format_version: 5
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: enable_index_compression: 1
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: block_align: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: max_auto_readahead_size: 262144
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: prepopulate_block_cache: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: initial_auto_readahead_size: 8192
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: num_file_reads_for_auto_readahead: 2
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression: NoCompression
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.num_levels: 7
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-09T18:43:30.384
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 
2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:43:30.384 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 
2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 
vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.blob_file_size: 
268435456 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.292+0000 7fdb870e0d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 42.sst 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 44, last_sequence is 23629, log_number is 40,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 40 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56a5d3a7-c1c3-40c5-814c-1d81e519a908 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081810298214, "job": 1, "event": "recovery_started", "wal_files": [40]} 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.296+0000 7fdb870e0d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #40 mode 2 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081810309373, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 45, "file_size": 2203819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23634, "largest_seqno": 25831, "table_properties": {"data_size": 2195774, "index_size": 4764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, 
"raw_key_size": 22867, "raw_average_key_size": 25, "raw_value_size": 2177061, "raw_average_value_size": 2435, "num_data_blocks": 216, "num_entries": 894, "num_filter_entries": 894, "num_deletions": 10, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773081810, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56a5d3a7-c1c3-40c5-814c-1d81e519a908", "db_session_id": "LN0JSPA88ABWKVZMEUX9", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081810309458, "job": 1, "event": "recovery_finished"} 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 47 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5607b1452e00 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.308+0000 7fdb870e0d80 4 rocksdb: DB pointer 0x5607b155e000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 starting mon.c rank 1 at public addrs [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] at bind addrs [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 1 mon.c@-1(???) 
e3 preinit fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 mon.c@-1(???).mds e1 new map 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 mon.c@-1(???).mds e1 print_map 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: e1 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:43:30.385 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: legacy client fscid: -1 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: No filesystems configured 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 mon.c@-1(???).osd e100 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 mon.c@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:43:30.386 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 mon.c@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 0 mon.c@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:43:30.386 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: debug 2026-03-09T18:43:30.312+0000 7fdb870e0d80 1 mon.c@-1(???).paxosservice(auth 1..25) refresh upgraded, format 0 -> 3 2026-03-09T18:43:30.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cluster 2026-03-09T18:43:28.961540+0000 mgr.y (mgr.24991) 156 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cluster 2026-03-09T18:43:28.961540+0000 mgr.y (mgr.24991) 156 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cephadm 2026-03-09T18:43:29.213211+0000 mgr.y (mgr.24991) 157 : cephadm [INF] Upgrade: Updating mon.c 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cephadm 2026-03-09T18:43:29.213211+0000 mgr.y (mgr.24991) 157 : cephadm [INF] Upgrade: Updating mon.c 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.219063+0000 mon.a (mon.0) 1377 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 
2026-03-09T18:43:29.219063+0000 mon.a (mon.0) 1377 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.221865+0000 mon.a (mon.0) 1378 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.221865+0000 mon.a (mon.0) 1378 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.222246+0000 mon.a (mon.0) 1379 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.222246+0000 mon.a (mon.0) 1379 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.222617+0000 mon.a (mon.0) 1380 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: audit 2026-03-09T18:43:29.222617+0000 mon.a (mon.0) 1380 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cephadm 
2026-03-09T18:43:29.223173+0000 mgr.y (mgr.24991) 158 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:30 vm00 bash[65531]: cephadm 2026-03-09T18:43:29.223173+0000 mgr.y (mgr.24991) 158 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: cluster 2026-03-09T18:43:28.961540+0000 mgr.y (mgr.24991) 156 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: cephadm 2026-03-09T18:43:29.213211+0000 mgr.y (mgr.24991) 157 : cephadm [INF] Upgrade: Updating mon.c
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: audit 2026-03-09T18:43:29.219063+0000 mon.a (mon.0) 1377 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: audit 2026-03-09T18:43:29.221865+0000 mon.a (mon.0) 1378 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: audit 2026-03-09T18:43:29.222246+0000 mon.a (mon.0) 1379 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: audit 2026-03-09T18:43:29.222617+0000 mon.a (mon.0) 1380 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:43:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:30 vm00 bash[17468]: cephadm 2026-03-09T18:43:29.223173+0000 mgr.y (mgr.24991) 158 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: cluster 2026-03-09T18:43:28.961540+0000 mgr.y (mgr.24991) 156 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: cephadm 2026-03-09T18:43:29.213211+0000 mgr.y (mgr.24991) 157 : cephadm [INF] Upgrade: Updating mon.c
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: audit 2026-03-09T18:43:29.219063+0000 mon.a (mon.0) 1377 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: audit 2026-03-09T18:43:29.221865+0000 mon.a (mon.0) 1378 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: audit 2026-03-09T18:43:29.222246+0000 mon.a (mon.0) 1379 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: audit 2026-03-09T18:43:29.222617+0000 mon.a (mon.0) 1380 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:43:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:30 vm08 bash[46122]: cephadm 2026-03-09T18:43:29.223173+0000 mgr.y (mgr.24991) 158 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-09T18:43:31.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.526276+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.528478+0000 mon.a (mon.0) 1381 : cluster [INF] mon.a calling monitor election
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.531680+0000 mon.a (mon.0) 1382 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.539290+0000 mon.a (mon.0) 1383 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]}
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.539416+0000 mon.a (mon.0) 1384 : cluster [DBG] fsmap
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.539635+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.540273+0000 mon.a (mon.0) 1386 : cluster [DBG] mgrmap e42: y(active, since 108s), standbys: x
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: cluster 2026-03-09T18:43:30.548489+0000 mon.a (mon.0) 1387 : cluster [INF] overall HEALTH_OK
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: audit 2026-03-09T18:43:30.552544+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: audit 2026-03-09T18:43:30.557144+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:31 vm00 bash[65531]: audit 2026-03-09T18:43:30.557867+0000 mon.a (mon.0) 1390 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.526276+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.528478+0000 mon.a (mon.0) 1381 : cluster [INF] mon.a calling monitor election
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.531680+0000 mon.a (mon.0) 1382 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.539290+0000 mon.a (mon.0) 1383 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]}
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.539416+0000 mon.a (mon.0) 1384 : cluster [DBG] fsmap
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.539635+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.540273+0000 mon.a (mon.0) 1386 : cluster [DBG] mgrmap e42: y(active, since 108s), standbys: x
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: cluster 2026-03-09T18:43:30.548489+0000 mon.a (mon.0) 1387 : cluster [INF] overall HEALTH_OK
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: audit 2026-03-09T18:43:30.552544+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: audit 2026-03-09T18:43:30.557144+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:31 vm00 bash[17468]: audit 2026-03-09T18:43:30.557867+0000 mon.a (mon.0) 1390 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.526276+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.528478+0000 mon.a (mon.0) 1381 : cluster [INF] mon.a calling monitor election
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.531680+0000 mon.a (mon.0) 1382 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.539290+0000 mon.a (mon.0) 1383 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0],b=[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0],c=[v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0]}
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.539416+0000 mon.a (mon.0) 1384 : cluster [DBG] fsmap
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.539635+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.540273+0000 mon.a (mon.0) 1386 : cluster [DBG] mgrmap e42: y(active, since 108s), standbys: x
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: cluster 2026-03-09T18:43:30.548489+0000 mon.a (mon.0) 1387 : cluster [INF] overall HEALTH_OK
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: audit 2026-03-09T18:43:30.552544+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: audit 2026-03-09T18:43:30.557144+0000 mon.a (mon.0) 1389 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:31 vm08 bash[46122]: audit 2026-03-09T18:43:30.557867+0000 mon.a (mon.0) 1390 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:32 vm00 bash[65531]: cluster 2026-03-09T18:43:30.962169+0000 mgr.y (mgr.24991) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-09T18:43:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:32 vm00 bash[65531]: audit 2026-03-09T18:43:31.416275+0000 mgr.y (mgr.24991) 160 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:43:33.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:32 vm00 bash[17468]: cluster 2026-03-09T18:43:30.962169+0000 mgr.y (mgr.24991) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-09T18:43:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:32 vm00 bash[17468]: audit 2026-03-09T18:43:31.416275+0000 mgr.y (mgr.24991) 160 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:43:33.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:32 vm08 bash[46122]: cluster 2026-03-09T18:43:30.962169+0000 mgr.y (mgr.24991) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-09T18:43:33.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:32 vm08 bash[46122]: audit 2026-03-09T18:43:31.416275+0000 mgr.y (mgr.24991) 160 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:43:35.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:34 vm00 bash[65531]: cluster 2026-03-09T18:43:32.962452+0000 mgr.y (mgr.24991) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
2026-03-09T18:43:35.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:34 vm00 bash[17468]: cluster 2026-03-09T18:43:32.962452+0000 mgr.y (mgr.24991) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
2026-03-09T18:43:35.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:34 vm08 bash[46122]: cluster 2026-03-09T18:43:32.962452+0000 mgr.y (mgr.24991) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
2026-03-09T18:43:35.628 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:35 vm00 bash[53976]: debug 2026-03-09T18:43:35.320+0000 7f7a3ff17640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:36 vm00 bash[65531]: cluster 2026-03-09T18:43:34.962997+0000 mgr.y (mgr.24991) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 33 op/s
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:36 vm00 bash[65531]: audit 2026-03-09T18:43:36.018784+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:36 vm00 bash[65531]: audit 2026-03-09T18:43:36.028650+0000 mon.a (mon.0) 1392 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:36 vm00 bash[65531]: audit 2026-03-09T18:43:36.613527+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:36 vm00 bash[65531]: audit 2026-03-09T18:43:36.618932+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:36 vm00 bash[17468]: cluster 2026-03-09T18:43:34.962997+0000 mgr.y (mgr.24991) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 33 op/s
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:36 vm00 bash[17468]: audit 2026-03-09T18:43:36.018784+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:36 vm00 bash[17468]: audit 2026-03-09T18:43:36.028650+0000 mon.a (mon.0) 1392 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:36 vm00 bash[17468]: audit 2026-03-09T18:43:36.613527+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.130 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:36 vm00 bash[17468]: audit 2026-03-09T18:43:36.618932+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:36 vm08 bash[46122]: cluster 2026-03-09T18:43:34.962997+0000 mgr.y (mgr.24991) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 33 op/s
2026-03-09T18:43:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:36 vm08 bash[46122]: audit 2026-03-09T18:43:36.018784+0000 mon.a (mon.0) 1391 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:36 vm08 bash[46122]: audit 2026-03-09T18:43:36.028650+0000 mon.a (mon.0) 1392 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:36 vm08 bash[46122]: audit 2026-03-09T18:43:36.613527+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:36 vm08 bash[46122]: audit 2026-03-09T18:43:36.618932+0000 mon.a (mon.0) 1394 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:38 vm00 bash[65531]: cluster 2026-03-09T18:43:36.963387+0000 mgr.y (mgr.24991) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:43:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:38 vm00 bash[17468]: cluster 2026-03-09T18:43:36.963387+0000 mgr.y (mgr.24991) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:43:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:38 vm08 bash[46122]: cluster 2026-03-09T18:43:36.963387+0000 mgr.y (mgr.24991) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:43:39.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:43:39] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0"
2026-03-09T18:43:40.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:39 vm08 bash[46122]: cluster 2026-03-09T18:43:38.963805+0000 mgr.y (mgr.24991) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:43:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:39 vm00 bash[17468]: cluster 2026-03-09T18:43:38.963805+0000 mgr.y (mgr.24991) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:43:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:39 vm00 bash[65531]: cluster 2026-03-09T18:43:38.963805+0000 mgr.y (mgr.24991) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: cluster 2026-03-09T18:43:40.964339+0000 mgr.y (mgr.24991) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:41.426158+0000 mgr.y (mgr.24991) 166 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.427210+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.435566+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.437232+0000 mon.a (mon.0) 1397 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.437893+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.443614+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.491679+0000 mon.a (mon.0) 1400 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.492829+0000 mon.a (mon.0) 1401 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.493539+0000 mon.a (mon.0) 1402 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:42 vm00 bash[65531]: audit 2026-03-09T18:43:42.494006+0000 mon.a (mon.0) 1403 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: cluster 2026-03-09T18:43:40.964339+0000 mgr.y (mgr.24991) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:41.426158+0000 mgr.y (mgr.24991) 166 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.427210+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.435566+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.437232+0000 mon.a (mon.0) 1397 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.437893+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.443614+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y'
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.491679+0000 mon.a (mon.0) 1400 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.492829+0000 mon.a (mon.0) 1401 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.493539+0000 mon.a (mon.0) 1402 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch
2026-03-09T18:43:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:42 vm00 bash[17468]: audit 2026-03-09T18:43:42.494006+0000 mon.a (mon.0) 1403 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch
2026-03-09T18:43:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: cluster 2026-03-09T18:43:40.964339+0000 mgr.y (mgr.24991) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:43:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42
vm08 bash[46122]: audit 2026-03-09T18:43:41.426158+0000 mgr.y (mgr.24991) 166 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:43.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:41.426158+0000 mgr.y (mgr.24991) 166 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.427210+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.427210+0000 mon.a (mon.0) 1395 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.435566+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.435566+0000 mon.a (mon.0) 1396 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.437232+0000 mon.a (mon.0) 1397 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.437232+0000 mon.a (mon.0) 1397 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.437893+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.437893+0000 mon.a (mon.0) 1398 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.443614+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.443614+0000 mon.a (mon.0) 1399 : audit [INF] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.491679+0000 mon.a (mon.0) 1400 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.491679+0000 mon.a (mon.0) 1400 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.492829+0000 mon.a (mon.0) 1401 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: 
audit 2026-03-09T18:43:42.492829+0000 mon.a (mon.0) 1401 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.493539+0000 mon.a (mon.0) 1402 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.493539+0000 mon.a (mon.0) 1402 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.494006+0000 mon.a (mon.0) 1403 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-09T18:43:43.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:42 vm08 bash[46122]: audit 2026-03-09T18:43:42.494006+0000 mon.a (mon.0) 1403 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-09T18:43:43.549 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:43.549 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: Stopping Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:43:43.549 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.549 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.550 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.550 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.550 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.550 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.550 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.550 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:43.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 bash[17468]: debug 2026-03-09T18:43:43.584+0000 7fd64f0ef700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T18:43:43.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 bash[17468]: debug 2026-03-09T18:43:43.584+0000 7fd64f0ef700 -1 mon.a@0(leader) e3 *** Got Signal Terminated ***
2026-03-09T18:43:43.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 bash[69397]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon-a
2026-03-09T18:43:43.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service: Deactivated successfully.
2026-03-09T18:43:43.826 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: Stopped Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:43:43.826 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:43 vm00 bash[53976]: [09/Mar/2026:18:43:43] ENGINE Bus STOPPING
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: [09/Mar/2026:18:43:44] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: [09/Mar/2026:18:43:44] ENGINE Bus STOPPED
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: [09/Mar/2026:18:43:44] ENGINE Bus STARTING
2026-03-09T18:43:44.076 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: Started Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:43:44.076 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.048+0000 7fc8d9a3dd80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.048+0000 7fc8d9a3dd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.048+0000 7fc8d9a3dd80 0 pidfile_write: ignore empty --pid-file
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 0 load: jerasure load: lrc
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Git sha 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: DB SUMMARY
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: DB Session ID: OH62SINEYZ7BIXZDY6AT
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: MANIFEST file: MANIFEST-000015 size: 2139 Bytes
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000048.sst
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000046.log size: 2846551 ;
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.create_if_missing: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.env: 0x558977941dc0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.info_log: 0x5589b687b7e0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.statistics: (nil)
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.use_fsync: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.db_log_dir:
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.wal_dir:
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.write_buffer_manager: 0x5589b687f900
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T18:43:44.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.unordered_write: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.row_cache: None
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.wal_filter: None
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.two_write_queues: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.wal_compression: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.atomic_flush: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_open_files: -1
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_background_flushes: -1
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Compression algorithms supported:
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kZSTD supported: 0
2026-03-09T18:43:44.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kXpressCompression supported: 0
2026-03-09T18:43:44.079
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 
2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.merge_operator: 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5589b687a320) 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: cache_index_and_filter_blocks: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: 
pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: pin_top_level_index_and_filter: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: index_type: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: data_block_index_type: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: index_shortening: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: checksum: 4 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: no_block_cache: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_cache: 0x5589b68a1350 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_cache_name: BinnedLRUCache 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_cache_options: 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: capacity : 536870912 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: num_shard_bits : 4 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: strict_capacity_limit : 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: high_pri_pool_ratio: 0.000 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_cache_compressed: (nil) 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: persistent_cache: (nil) 
2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_size: 4096 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_size_deviation: 10 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_restart_interval: 16 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: index_block_restart_interval: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: metadata_block_size: 4096 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: partition_filters: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: use_delta_encoding: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: filter_policy: bloomfilter 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: whole_key_filtering: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: verify_compression: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: read_amp_bytes_per_bit: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: format_version: 5 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: enable_index_compression: 1 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: block_align: 0 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: max_auto_readahead_size: 262144 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: prepopulate_block_cache: 0 2026-03-09T18:43:44.079 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: initial_auto_readahead_size: 8192 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: num_file_reads_for_auto_readahead: 2 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:43:44.079 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.num_levels: 7 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 
2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 
2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 
vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 
2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: 
debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:43:44.080 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: 
Options.bloom_locality: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.052+0000 7fc8d9a3dd80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 48.sst 
2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.060+0000 7fc8d9a3dd80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.060+0000 7fc8d9a3dd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 50, last_sequence is 21827, log_number is 46,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.060+0000 7fc8d9a3dd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 46 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.060+0000 7fc8d9a3dd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2e21df9f-6d93-41b1-8998-7924d2dfcd8c 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.060+0000 7fc8d9a3dd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081824061891, "job": 1, "event": "recovery_started", "wal_files": [46]} 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.060+0000 7fc8d9a3dd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #46 mode 2 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.072+0000 7fc8d9a3dd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081824075776, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 51, "file_size": 2557199, "file_checksum": "", 
"file_checksum_func_name": "Unknown", "smallest_seqno": 21828, "largest_seqno": 24080, "table_properties": {"data_size": 2548139, "index_size": 5523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 25024, "raw_average_key_size": 25, "raw_value_size": 2527312, "raw_average_value_size": 2552, "num_data_blocks": 251, "num_entries": 990, "num_filter_entries": 990, "num_deletions": 10, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773081824, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2e21df9f-6d93-41b1-8998-7924d2dfcd8c", "db_session_id": "OH62SINEYZ7BIXZDY6AT", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.072+0000 7fc8d9a3dd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773081824075885, "job": 1, "event": "recovery_finished"} 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.072+0000 7fc8d9a3dd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 53 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.072+0000 7fc8d9a3dd80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than 
needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.076+0000 7fc8d9a3dd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.076+0000 7fc8d9a3dd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5589b68a2e00 2026-03-09T18:43:44.081 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.076+0000 7fc8d9a3dd80 4 rocksdb: DB pointer 0x5589b69ae000 2026-03-09T18:43:44.081 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:44.081 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:44.081 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:43:43 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] at bind addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 1 mon.a@-1(???) e3 preinit fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 mon.a@-1(???).mds e1 new map 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 mon.a@-1(???).mds e1 print_map 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: e1 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:43:44.379 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: legacy client fscid: -1 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: No filesystems configured 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 mon.a@-1(???).osd e100 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 mon.a@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 mon.a@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 0 mon.a@-1(???).osd e100 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 1 mon.a@-1(???).paxosservice(auth 1..26) refresh upgraded, format 0 -> 3 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 4 
mon.a@-1(???).mgr e0 loading version 42 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 4 mon.a@-1(???).mgr e42 active server: [v2:192.168.123.100:6800/4136601387,v1:192.168.123.100:6801/4136601387](24991) 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:44 vm00 bash[69512]: debug 2026-03-09T18:43:44.080+0000 7fc8d9a3dd80 4 mon.a@-1(???).mgr e42 mkfs or daemon transitioned to available, loading commands 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: [09/Mar/2026:18:43:44] ENGINE Serving on http://:::9283 2026-03-09T18:43:44.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: [09/Mar/2026:18:43:44] ENGINE Bus STARTED 2026-03-09T18:43:44.646 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: ignoring --setuser ceph since I am not root 2026-03-09T18:43:44.646 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: ignoring --setgroup ceph since I am not root 2026-03-09T18:43:44.646 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: debug 2026-03-09T18:43:44.400+0000 7f994c6b9640 1 -- 192.168.123.100:0/2766892965 <== mon.2 v2:192.168.123.108:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x55b1302104e0 con 0x55b1301ef800 2026-03-09T18:43:44.647 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: debug 2026-03-09T18:43:44.400+0000 7f994c6b9640 1 -- 192.168.123.100:0/2766892965 <== mon.2 v2:192.168.123.108:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55b1301ed4a0 con 0x55b1301ef800 2026-03-09T18:43:44.647 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: debug 2026-03-09T18:43:44.468+0000 7f994ef16140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:43:44.647 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 
18:43:44 vm00 bash[53976]: debug 2026-03-09T18:43:44.508+0000 7f994ef16140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:43:44.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:44 vm08 bash[43582]: ignoring --setuser ceph since I am not root 2026-03-09T18:43:44.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:44 vm08 bash[43582]: ignoring --setgroup ceph since I am not root 2026-03-09T18:43:44.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:44 vm08 bash[43582]: debug 2026-03-09T18:43:44.451+0000 7effab9d7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:43:44.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:44 vm08 bash[43582]: debug 2026-03-09T18:43:44.495+0000 7effab9d7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:43:44.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:44 vm08 bash[43582]: debug 2026-03-09T18:43:44.623+0000 7effab9d7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:43:44.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: debug 2026-03-09T18:43:44.648+0000 7f994ef16140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:43:45.224 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:44 vm08 bash[43582]: debug 2026-03-09T18:43:44.939+0000 7effab9d7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:43:45.307 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:44 vm00 bash[53976]: debug 2026-03-09T18:43:44.964+0000 7f994ef16140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:43:45.585 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.309185+0000 mon.b (mon.2) 5 : cluster [INF] mon.b calling monitor election 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.309185+0000 mon.b (mon.2) 5 : cluster [INF] 
mon.b calling monitor election 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.310277+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.310277+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.310832+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.310832+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.311417+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.311417+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.311708+0000 mon.b (mon.2) 7 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.311708+0000 mon.b (mon.2) 7 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:45.586 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.312040+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.312040+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.315660+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.315660+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320090+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320090+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320153+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320153+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320229+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-09T18:43:44.301636+0000 
2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320229+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-09T18:43:44.301636+0000 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320279+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-09T18:19:13.682656+0000 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320279+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-09T18:19:13.682656+0000 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320320+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320320+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320372+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320372+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320415+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320415+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 
2026-03-09T18:43:44.320457+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320457+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320516+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320516+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320915+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.320915+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.321001+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:43:45.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.321001+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.322334+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.322334+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e42: y(active, since 
2m), standbys: x 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.322928+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.322928+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.334304+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: audit 2026-03-09T18:43:44.334304+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.334495+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e43: y(active, since 2m), standbys: x 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:45 vm00 bash[65531]: cluster 2026-03-09T18:43:44.334495+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e43: y(active, since 2m), standbys: x 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.309185+0000 mon.b (mon.2) 5 : cluster [INF] mon.b calling monitor election 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.309185+0000 mon.b (mon.2) 5 : cluster [INF] mon.b calling monitor election 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.310277+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.310277+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling 
monitor election 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.310832+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.310832+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.311417+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.311417+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.311708+0000 mon.b (mon.2) 7 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.311708+0000 mon.b (mon.2) 7 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.312040+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.312040+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24991 
192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.315660+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.315660+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320090+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320090+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320153+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320153+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320229+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-09T18:43:44.301636+0000 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320229+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-09T18:43:44.301636+0000 2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320279+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-09T18:19:13.682656+0000 
2026-03-09T18:43:45.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320279+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-09T18:19:13.682656+0000 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320320+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320320+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320372+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320372+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320415+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320415+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320457+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320457+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 
vm00 bash[69512]: cluster 2026-03-09T18:43:44.320516+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320516+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320915+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.320915+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.321001+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.321001+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.322334+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.322334+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.322928+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.322928+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-09T18:43:45.588 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.334304+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: audit 2026-03-09T18:43:44.334304+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.334495+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e43: y(active, since 2m), standbys: x 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:45 vm00 bash[69512]: cluster 2026-03-09T18:43:44.334495+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e43: y(active, since 2m), standbys: x 2026-03-09T18:43:45.588 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: debug 2026-03-09T18:43:45.492+0000 7f994ef16140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:43:45.645 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.309185+0000 mon.b (mon.2) 5 : cluster [INF] mon.b calling monitor election 2026-03-09T18:43:45.645 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.309185+0000 mon.b (mon.2) 5 : cluster [INF] mon.b calling monitor election 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.310277+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.310277+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.310832+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor 
election 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.310832+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.311417+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.311417+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.311708+0000 mon.b (mon.2) 7 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.311708+0000 mon.b (mon.2) 7 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.312040+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.312040+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24991 192.168.123.100:0/2678537298' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 
2026-03-09T18:43:44.315660+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.315660+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320090+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320090+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320153+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320153+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320229+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-09T18:43:44.301636+0000 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320229+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-09T18:43:44.301636+0000 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320279+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-09T18:19:13.682656+0000 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320279+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-09T18:19:13.682656+0000 
2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320320+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320320+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320372+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320372+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320415+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320415+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320457+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320457+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320516+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-09T18:43:45.646 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320516+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320915+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.320915+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.321001+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.321001+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.322334+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.322334+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.322928+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.322928+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.334304+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T18:43:45.646 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: audit 2026-03-09T18:43:44.334304+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.334495+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e43: y(active, since 2m), standbys: x 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:45 vm08 bash[46122]: cluster 2026-03-09T18:43:44.334495+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e43: y(active, since 2m), standbys: x 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.435+0000 7effab9d7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:43:45.646 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.519+0000 7effab9d7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:43:45.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: debug 2026-03-09T18:43:45.584+0000 7f994ef16140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:43:45.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:43:45.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:43:45.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: from numpy import show_config as show_numpy_config 2026-03-09T18:43:45.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: debug 2026-03-09T18:43:45.728+0000 7f994ef16140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:43:45.896 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:43:45.896 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:43:45.897 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: from numpy import show_config as show_numpy_config 2026-03-09T18:43:45.897 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.647+0000 7effab9d7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:43:45.897 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.815+0000 7effab9d7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:43:45.897 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.855+0000 7effab9d7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:43:46.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.895+0000 7effab9d7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:43:46.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.935+0000 7effab9d7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:43:46.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:45 vm08 bash[43582]: debug 2026-03-09T18:43:45.987+0000 7effab9d7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:43:46.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: debug 2026-03-09T18:43:45.904+0000 7f994ef16140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:43:46.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: debug 2026-03-09T18:43:45.944+0000 7f994ef16140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:43:46.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:45 vm00 bash[53976]: debug 2026-03-09T18:43:45.984+0000 7f994ef16140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 
2026-03-09T18:43:46.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.032+0000 7f994ef16140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:43:46.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.088+0000 7f994ef16140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:43:46.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.471+0000 7effab9d7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:43:46.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.511+0000 7effab9d7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:43:46.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.551+0000 7effab9d7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:43:46.725 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.715+0000 7effab9d7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:43:46.855 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.592+0000 7f994ef16140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:43:46.855 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.636+0000 7f994ef16140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:43:46.855 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.676+0000 7f994ef16140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:43:47.114 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.767+0000 7effab9d7140 -1 mgr[py] Module 
crash has missing NOTIFY_TYPES member 2026-03-09T18:43:47.114 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.811+0000 7effab9d7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:43:47.114 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:46 vm08 bash[43582]: debug 2026-03-09T18:43:46.939+0000 7effab9d7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:43:47.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.856+0000 7f994ef16140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:43:47.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.908+0000 7f994ef16140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:43:47.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:46 vm00 bash[53976]: debug 2026-03-09T18:43:46.956+0000 7f994ef16140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:43:47.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:47 vm00 bash[53976]: debug 2026-03-09T18:43:47.080+0000 7f994ef16140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:43:47.393 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: debug 2026-03-09T18:43:47.111+0000 7effab9d7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:43:47.393 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: debug 2026-03-09T18:43:47.303+0000 7effab9d7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:43:47.393 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: debug 2026-03-09T18:43:47.343+0000 7effab9d7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:43:47.523 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:47 vm00 bash[53976]: debug 
2026-03-09T18:43:47.268+0000 7f994ef16140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:43:47.523 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:47 vm00 bash[53976]: debug 2026-03-09T18:43:47.484+0000 7f994ef16140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:43:47.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: debug 2026-03-09T18:43:47.391+0000 7effab9d7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:43:47.724 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: debug 2026-03-09T18:43:47.555+0000 7effab9d7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:43:47.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:47 vm00 bash[53976]: debug 2026-03-09T18:43:47.524+0000 7f994ef16140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:43:47.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:47 vm00 bash[53976]: debug 2026-03-09T18:43:47.572+0000 7f994ef16140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:43:47.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:47 vm00 bash[53976]: debug 2026-03-09T18:43:47.736+0000 7f994ef16140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:43:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: cluster 2026-03-09T18:43:47.818073+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:43:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: cluster 2026-03-09T18:43:47.818073+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:43:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: cluster 2026-03-09T18:43:47.818177+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:43:48.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: cluster 2026-03-09T18:43:47.818177+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:43:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.819571+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:43:48.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.819571+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.820771+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.820771+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.821589+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.821589+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 
192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.821944+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:47 vm08 bash[46122]: audit 2026-03-09T18:43:47.821944+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: debug 2026-03-09T18:43:47.807+0000 7effab9d7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: [09/Mar/2026:18:43:47] ENGINE Bus STARTING 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: CherryPy Checker: 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: The Application mounted at '' has an empty config. 
2026-03-09T18:43:48.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: [09/Mar/2026:18:43:47] ENGINE Serving on http://:::9283 2026-03-09T18:43:48.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:43:47 vm08 bash[43582]: [09/Mar/2026:18:43:47] ENGINE Bus STARTED 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: cluster 2026-03-09T18:43:47.818073+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: cluster 2026-03-09T18:43:47.818073+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: cluster 2026-03-09T18:43:47.818177+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: cluster 2026-03-09T18:43:47.818177+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.819571+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.819571+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.820771+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 
192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.820771+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.821589+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.821589+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.821944+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:47 vm00 bash[65531]: audit 2026-03-09T18:43:47.821944+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 
192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: cluster 2026-03-09T18:43:47.818073+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:43:48.312 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: cluster 2026-03-09T18:43:47.818073+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: cluster 2026-03-09T18:43:47.818177+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: cluster 2026-03-09T18:43:47.818177+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.819571+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.819571+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.820771+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 
192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.820771+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.821589+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.821589+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.821944+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:47 vm00 bash[69512]: audit 2026-03-09T18:43:47.821944+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 
192.168.123.108:0/3380759506' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:48 vm00 bash[53976]: debug 2026-03-09T18:43:48.012+0000 7f994ef16140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:48 vm00 bash[53976]: [09/Mar/2026:18:43:48] ENGINE Bus STARTING 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:48 vm00 bash[53976]: CherryPy Checker: 2026-03-09T18:43:48.313 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:48 vm00 bash[53976]: The Application mounted at '' has an empty config. 2026-03-09T18:43:48.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:48 vm00 bash[53976]: [09/Mar/2026:18:43:48] ENGINE Serving on http://:::9283 2026-03-09T18:43:48.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:48 vm00 bash[53976]: [09/Mar/2026:18:43:48] ENGINE Bus STARTED 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:47.876835+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e44: y(active, since 2m), standbys: x 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:47.876835+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e44: y(active, since 2m), standbys: x 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.018601+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.018601+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.018924+0000 
mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.018924+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.029517+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.029517+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.032498+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e45: y(active, starting, since 0.0136824s), standbys: x 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.032498+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e45: y(active, starting, since 0.0136824s), standbys: x 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.047367+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.047367+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.047463+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:43:49.225 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.047463+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.047533+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.049136+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.049250+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.050241+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.050357+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.050780+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:43:49.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.050944+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051101+0000 mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051256+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051404+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051511+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051674+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051834+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.051933+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: cluster 2026-03-09T18:43:48.061126+0000 mon.a (mon.0) 40 : cluster [INF] Manager daemon y is now available
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.093522+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.099834+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.120291+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.120623+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.173883+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:43:49.226 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:48 vm08 bash[46122]: audit 2026-03-09T18:43:48.174229+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: cluster 2026-03-09T18:43:47.876835+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e44: y(active, since 2m), standbys: x
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: cluster 2026-03-09T18:43:48.018601+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: cluster 2026-03-09T18:43:48.018924+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: cluster 2026-03-09T18:43:48.029517+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: cluster 2026-03-09T18:43:48.032498+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e45: y(active, starting, since 0.0136824s), standbys: x
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.047367+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.047463+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.047533+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T18:43:49.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.049136+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.049250+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.050241+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.050357+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.050780+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.050944+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051101+0000 mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051256+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051404+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051511+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051674+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051834+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.051933+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: cluster 2026-03-09T18:43:48.061126+0000 mon.a (mon.0) 40 : cluster [INF] Manager daemon y is now available
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.093522+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.099834+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.120291+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.120623+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.173883+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:48 vm00 bash[65531]: audit 2026-03-09T18:43:48.174229+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: cluster 2026-03-09T18:43:47.876835+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e44: y(active, since 2m), standbys: x
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: cluster 2026-03-09T18:43:48.018601+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: cluster 2026-03-09T18:43:48.018924+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: cluster 2026-03-09T18:43:48.029517+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-09T18:43:49.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: cluster 2026-03-09T18:43:48.032498+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e45: y(active, starting, since 0.0136824s), standbys: x
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.047367+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.047463+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.047533+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.049136+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.049250+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.050241+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.050357+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.050780+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.050944+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051101+0000 mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051256+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051404+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051511+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051674+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051834+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.051933+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: cluster 2026-03-09T18:43:48.061126+0000 mon.a (mon.0) 40 : cluster [INF] Manager daemon y is now available
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.093522+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.099834+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.120291+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.120623+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.173883+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:43:49.381 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.174229+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:43:49.381 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:48 vm00 bash[69512]: audit 2026-03-09T18:43:48.174229+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T18:43:49.381 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:49 vm00 bash[53976]: debug 2026-03-09T18:43:49.072+0000 7f991b282640 -1 mgr.server handle_report got status from non-daemon mon.a 2026-03-09T18:43:49.724 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:43:49 vm08 bash[44768]: logger=infra.usagestats t=2026-03-09T18:43:49.283762504Z level=info msg="Usage stats are ready to report" 2026-03-09T18:43:49.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:43:49] "GET /metrics HTTP/1.1" 200 34779 "" "Prometheus/2.51.0" 2026-03-09T18:43:50.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:50 vm00 bash[65531]: cluster 2026-03-09T18:43:49.072910+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e46: y(active, since 1.05411s), standbys: x 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:50 vm00 bash[65531]: cluster 2026-03-09T18:43:49.072910+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e46: y(active, since 1.05411s), standbys: x 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:50 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.370832+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTING 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:50 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.370832+0000 mgr.y 
(mgr.44107) 2 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTING 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:50 vm00 bash[69512]: cluster 2026-03-09T18:43:49.072910+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e46: y(active, since 1.05411s), standbys: x 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:50 vm00 bash[69512]: cluster 2026-03-09T18:43:49.072910+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e46: y(active, since 1.05411s), standbys: x 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:50 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.370832+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTING 2026-03-09T18:43:50.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:50 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.370832+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTING 2026-03-09T18:43:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:50 vm08 bash[46122]: cluster 2026-03-09T18:43:49.072910+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e46: y(active, since 1.05411s), standbys: x 2026-03-09T18:43:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:50 vm08 bash[46122]: cluster 2026-03-09T18:43:49.072910+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e46: y(active, since 1.05411s), standbys: x 2026-03-09T18:43:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:50 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.370832+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTING 2026-03-09T18:43:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:50 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.370832+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTING 2026-03-09T18:43:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.472384+0000 mgr.y (mgr.44107) 3 : cephadm [INF] 
[09/Mar/2026:18:43:49] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:43:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.472384+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.581467+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.581467+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.581600+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTED 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.581600+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTED 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.581836+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Client ('192.168.123.100', 53256) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cephadm 2026-03-09T18:43:49.581836+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Client ('192.168.123.100', 53256) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:43:51.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cluster 2026-03-09T18:43:50.050558+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cluster 2026-03-09T18:43:50.050558+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cluster 2026-03-09T18:43:50.110156+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e47: y(active, since 2s), standbys: x 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:51 vm00 bash[65531]: cluster 2026-03-09T18:43:50.110156+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e47: y(active, since 2s), standbys: x 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.472384+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.472384+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.581467+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.581467+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: 
cephadm 2026-03-09T18:43:49.581600+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTED 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.581600+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTED 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.581836+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Client ('192.168.123.100', 53256) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cephadm 2026-03-09T18:43:49.581836+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Client ('192.168.123.100', 53256) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cluster 2026-03-09T18:43:50.050558+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cluster 2026-03-09T18:43:50.050558+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cluster 2026-03-09T18:43:50.110156+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e47: y(active, since 2s), standbys: x 2026-03-09T18:43:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:51 vm00 bash[69512]: cluster 2026-03-09T18:43:50.110156+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e47: y(active, 
since 2s), standbys: x 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.472384+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.472384+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on http://192.168.123.100:8765 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.581467+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.581467+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Serving on https://192.168.123.100:7150 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.581600+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTED 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.581600+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Bus STARTED 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.581836+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Client ('192.168.123.100', 53256) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cephadm 2026-03-09T18:43:49.581836+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [09/Mar/2026:18:43:49] ENGINE Client 
('192.168.123.100', 53256) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:43:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cluster 2026-03-09T18:43:50.050558+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:51.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cluster 2026-03-09T18:43:50.050558+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:51.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cluster 2026-03-09T18:43:50.110156+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e47: y(active, since 2s), standbys: x 2026-03-09T18:43:51.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:51 vm08 bash[46122]: cluster 2026-03-09T18:43:50.110156+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e47: y(active, since 2s), standbys: x 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:53 vm00 bash[65531]: audit 2026-03-09T18:43:51.429919+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:53 vm00 bash[65531]: audit 2026-03-09T18:43:51.429919+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:53 vm00 bash[65531]: cluster 2026-03-09T18:43:52.050978+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:53.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:53 vm00 bash[65531]: cluster 2026-03-09T18:43:52.050978+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:53 vm00 bash[65531]: cluster 2026-03-09T18:43:52.135013+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e48: y(active, since 4s), standbys: x 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:53 vm00 bash[65531]: cluster 2026-03-09T18:43:52.135013+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e48: y(active, since 4s), standbys: x 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:53 vm00 bash[69512]: audit 2026-03-09T18:43:51.429919+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:53 vm00 bash[69512]: audit 2026-03-09T18:43:51.429919+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:53 vm00 bash[69512]: cluster 2026-03-09T18:43:52.050978+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:53 vm00 bash[69512]: cluster 2026-03-09T18:43:52.050978+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:43:53.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:53 vm00 bash[69512]: cluster 2026-03-09T18:43:52.135013+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e48: y(active, since 4s), standbys: x 
2026-03-09T18:43:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:53 vm08 bash[46122]: audit 2026-03-09T18:43:51.429919+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:43:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:53 vm08 bash[46122]: cluster 2026-03-09T18:43:52.050978+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:43:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:53 vm08 bash[46122]: cluster 2026-03-09T18:43:52.135013+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e48: y(active, since 4s), standbys: x
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: cluster 2026-03-09T18:43:54.051407+0000 mgr.y (mgr.44107) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.257850+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.267202+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.311532+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.318706+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.896711+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.906396+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:54.990320+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:55 vm00 bash[65531]: audit 2026-03-09T18:43:55.001256+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: cluster 2026-03-09T18:43:54.051407+0000 mgr.y (mgr.44107) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.257850+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.267202+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.311532+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.318706+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.896711+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.906396+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:54.990320+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:55 vm00 bash[69512]: audit 2026-03-09T18:43:55.001256+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: cluster 2026-03-09T18:43:54.051407+0000 mgr.y (mgr.44107) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.257850+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.267202+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.311532+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.318706+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.896711+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.906396+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:54.990320+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:55.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:55 vm08 bash[46122]: audit 2026-03-09T18:43:55.001256+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:43:57.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:57 vm00 bash[65531]: cluster 2026-03-09T18:43:56.051980+0000 mgr.y (mgr.44107) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-09T18:43:57.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:57 vm00 bash[69512]: cluster 2026-03-09T18:43:56.051980+0000 mgr.y (mgr.44107) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-09T18:43:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:57 vm08 bash[46122]: cluster 2026-03-09T18:43:56.051980+0000 mgr.y (mgr.44107) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-09T18:43:59.028 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (14m) 5s ago 21m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (103s) 5s ago 21m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 5s ago 20m 43.5M - 3.5 e1d6a67b021e ff3da66cebe9
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (2m) 5s ago 23m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (11m) 5s ago 24m 508M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (15s) 5s ago 24m 37.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (65s) 5s ago 24m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (29s) 5s ago 24m 36.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (14m) 5s ago 21m 7560k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (14m) 5s ago 21m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (23m) 5s ago 23m 53.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:43:59.456 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (23m) 5s ago 23m 55.3M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (23m) 5s ago 23m 49.3M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (22m) 5s ago 22m 55.6M 4096M 17.2.0 e1d6a67b021e 306d680cc55b
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (22m) 5s ago 22m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (22m) 5s ago 22m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (22m) 5s ago 22m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (21m) 5s ago 21m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 5s ago 21m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:43:59.457
INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (20m) 5s ago 20m 89.2M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:43:59.457 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (20m) 5s ago 20m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:43:59.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:59 vm00 bash[65531]: cluster 2026-03-09T18:43:58.052249+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:43:59.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:43:59 vm00 bash[65531]: cluster 2026-03-09T18:43:58.052249+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:43:59.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:59 vm00 bash[69512]: cluster 2026-03-09T18:43:58.052249+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:43:59.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:43:59 vm00 bash[69512]: cluster 2026-03-09T18:43:58.052249+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:43:59.711 
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 10,
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-09T18:43:59.711 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:43:59.712 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:43:59.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:43:59 vm08 bash[46122]: cluster 2026-03-09T18:43:58.052249+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T18:43:59.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:43:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:43:59] "GET /metrics HTTP/1.1" 200 34779 "" "Prometheus/2.51.0"
2026-03-09T18:43:59.987 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:43:59.987 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true,
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) mon on host(s) vm00",
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "mon"
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: ],
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "2/2 daemons upgraded",
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "message": "",
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:43:59.988 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:44:00.600 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:00 vm08 bash[46122]: audit 2026-03-09T18:43:59.020106+0000 mgr.y (mgr.44107) 13 : audit [DBG] from='client.44134 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:00.600 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:00 vm08 bash[46122]: audit 2026-03-09T18:43:59.244062+0000 mgr.y (mgr.44107) 14 : audit [DBG] from='client.34166 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:00.600 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:00 vm08 bash[46122]: audit 2026-03-09T18:43:59.714651+0000 mon.a (mon.0) 54 : audit [DBG] from='client.? 192.168.123.100:0/1141323004' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:00.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:00 vm00 bash[65531]: audit 2026-03-09T18:43:59.020106+0000 mgr.y (mgr.44107) 13 : audit [DBG] from='client.44134 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:00 vm00 bash[65531]: audit 2026-03-09T18:43:59.244062+0000 mgr.y (mgr.44107) 14 : audit [DBG] from='client.34166 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:00 vm00 bash[65531]: audit 2026-03-09T18:43:59.714651+0000 mon.a (mon.0) 54 : audit [DBG] from='client.? 192.168.123.100:0/1141323004' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:00.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:00 vm00 bash[69512]: audit 2026-03-09T18:43:59.020106+0000 mgr.y (mgr.44107) 13 : audit [DBG] from='client.44134 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:00.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:00 vm00 bash[69512]: audit 2026-03-09T18:43:59.244062+0000 mgr.y (mgr.44107) 14 : audit [DBG] from='client.34166 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:00.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:00 vm00 bash[69512]: audit 2026-03-09T18:43:59.714651+0000 mon.a (mon.0) 54 : audit [DBG] from='client.? 192.168.123.100:0/1141323004' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:43:59.455607+0000 mgr.y (mgr.44107) 15 : audit [DBG] from='client.44146 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:43:59.990854+0000 mgr.y (mgr.44107) 16 : audit [DBG] from='client.44158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: cluster 2026-03-09T18:44:00.052799+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.668329+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.677799+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.681975+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.682507+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.831929+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.844054+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.848507+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.848961+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.850172+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:00.850981+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.029112+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.040380+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.049907+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.056526+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.061780+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.072667+0000 mon.c (mon.1) 27 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:01 vm00 bash[69512]: audit 2026-03-09T18:44:01.073247+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:43:59.455607+0000 mgr.y (mgr.44107) 15 : audit [DBG] from='client.44146 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:43:59.990854+0000 mgr.y (mgr.44107) 16 : audit [DBG] from='client.44158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: cluster 2026-03-09T18:44:00.052799+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.668329+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.677799+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.681975+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.682507+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.831929+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.844054+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.848507+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.848961+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.850172+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:00.850981+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.029112+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.040380+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.049907+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.056526+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.061780+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.072667+0000 mon.c (mon.1) 27 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T18:44:01.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:01 vm00 bash[65531]: audit 2026-03-09T18:44:01.073247+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:43:59.455607+0000 mgr.y (mgr.44107) 15 : audit [DBG] from='client.44146 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:43:59.990854+0000 mgr.y (mgr.44107) 16 : audit [DBG] from='client.44158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:44:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: cluster 2026-03-09T18:44:00.052799+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 
7 op/s 2026-03-09T18:44:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: cluster 2026-03-09T18:44:00.052799+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T18:44:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.668329+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.668329+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.677799+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.677799+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.681975+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.681975+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.682507+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.682507+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.831929+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.831929+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.844054+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.844054+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.848507+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.848507+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.848961+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.848961+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.850172+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.850172+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.850981+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:00.850981+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.029112+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.029112+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: 
audit 2026-03-09T18:44:01.040380+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.040380+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.049907+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.049907+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.056526+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.056526+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.061780+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.061780+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.072667+0000 mon.c (mon.1) 27 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.072667+0000 mon.c (mon.1) 27 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.073247+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:01 vm08 bash[46122]: audit 2026-03-09T18:44:01.073247+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.852325+0000 mgr.y (mgr.44107) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.852325+0000 mgr.y (mgr.44107) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.852840+0000 mgr.y (mgr.44107) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.852840+0000 mgr.y (mgr.44107) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.893783+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.893783+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 
2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.901187+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.901187+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.938489+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.938489+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.949311+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.949311+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.977603+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.977603+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.992798+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:00.992798+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.072502+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.072502+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.074596+0000 mgr.y (mgr.44107) 27 : cephadm [INF] Reconfiguring daemon osd.3 on vm00 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.074596+0000 mgr.y (mgr.44107) 27 : cephadm [INF] Reconfiguring daemon osd.3 on vm00 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.525885+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.525885+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.534477+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.534477+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.852325+0000 mgr.y (mgr.44107) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.852325+0000 mgr.y (mgr.44107) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.852840+0000 mgr.y (mgr.44107) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.852840+0000 mgr.y (mgr.44107) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.893783+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.893783+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.901187+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.901187+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating 
vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.938489+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.938489+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.949311+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.949311+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.977603+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.977603+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.992798+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:00.992798+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating 
vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.072502+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.072502+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.074596+0000 mgr.y (mgr.44107) 27 : cephadm [INF] Reconfiguring daemon osd.3 on vm00 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.074596+0000 mgr.y (mgr.44107) 27 : cephadm [INF] Reconfiguring daemon osd.3 on vm00 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.525885+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.525885+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.534477+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.534477+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.536686+0000 mon.c (mon.1) 29 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"osd.2"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.536686+0000 mon.c (mon.1) 29 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.537423+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.537423+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.975204+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.975204+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.984454+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.984454+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.986808+0000 mon.c (mon.1) 31 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:44:02.630 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.986808+0000 mon.c (mon.1) 31 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.987510+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.987510+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.988104+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:02 vm00 bash[65531]: audit 2026-03-09T18:44:01.988104+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.536686+0000 mon.c (mon.1) 29 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.536686+0000 mon.c (mon.1) 29 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"osd.2"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.537423+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.537423+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.975204+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.975204+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.984454+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.984454+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.986808+0000 mon.c (mon.1) 31 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.986808+0000 mon.c (mon.1) 31 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:44:02.630 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.987510+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.987510+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.988104+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:02 vm00 bash[69512]: audit 2026-03-09T18:44:01.988104+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.852325+0000 mgr.y (mgr.44107) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.852325+0000 mgr.y (mgr.44107) 18 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.852840+0000 mgr.y (mgr.44107) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.852840+0000 mgr.y (mgr.44107) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 
2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.893783+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.893783+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.901187+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.901187+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.conf 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.938489+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.938489+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.949311+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.949311+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 
vm08 bash[46122]: cephadm 2026-03-09T18:44:00.977603+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.977603+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.992798+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:00.992798+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/config/ceph.client.admin.keyring 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.072502+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.072502+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 
2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.074596+0000 mgr.y (mgr.44107) 27 : cephadm [INF] Reconfiguring daemon osd.3 on vm00
2026-03-09T18:44:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.525885+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.534477+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.536686+0000 mon.c (mon.1) 29 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.537423+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.975204+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.984454+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.986808+0000 mon.c (mon.1) 31 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.987510+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:02.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:02 vm08 bash[46122]: audit 2026-03-09T18:44:01.988104+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:01.437747+0000 mgr.y (mgr.44107) 28 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.536482+0000 mgr.y (mgr.44107) 29 : cephadm [INF] Reconfiguring osd.2 (monmap changed)...
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.539069+0000 mgr.y (mgr.44107) 30 : cephadm [INF] Reconfiguring daemon osd.2 on vm00
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.986585+0000 mgr.y (mgr.44107) 31 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: cephadm 2026-03-09T18:44:01.988747+0000 mgr.y (mgr.44107) 32 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: cluster 2026-03-09T18:44:02.053121+0000 mgr.y (mgr.44107) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.419428+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.427900+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.430674+0000 mon.c (mon.1) 34 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.431313+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.889321+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.899824+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.901660+0000 mon.c (mon.1) 36 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.902521+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:02.903320+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:03 vm00 bash[65531]: audit 2026-03-09T18:44:03.098062+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:01.437747+0000 mgr.y (mgr.44107) 28 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.536482+0000 mgr.y (mgr.44107) 29 : cephadm [INF] Reconfiguring osd.2 (monmap changed)...
2026-03-09T18:44:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.539069+0000 mgr.y (mgr.44107) 30 : cephadm [INF] Reconfiguring daemon osd.2 on vm00
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.986585+0000 mgr.y (mgr.44107) 31 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: cephadm 2026-03-09T18:44:01.988747+0000 mgr.y (mgr.44107) 32 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: cluster 2026-03-09T18:44:02.053121+0000 mgr.y (mgr.44107) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.419428+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.427900+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.430674+0000 mon.c (mon.1) 34 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.431313+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.889321+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.899824+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.901660+0000 mon.c (mon.1) 36 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.902521+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:02.903320+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:03 vm00 bash[69512]: audit 2026-03-09T18:44:03.098062+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:44:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:01.437747+0000 mgr.y (mgr.44107) 28 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.536482+0000 mgr.y (mgr.44107) 29 : cephadm [INF] Reconfiguring osd.2 (monmap changed)...
2026-03-09T18:44:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.539069+0000 mgr.y (mgr.44107) 30 : cephadm [INF] Reconfiguring daemon osd.2 on vm00
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.986585+0000 mgr.y (mgr.44107) 31 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: cephadm 2026-03-09T18:44:01.988747+0000 mgr.y (mgr.44107) 32 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: cluster 2026-03-09T18:44:02.053121+0000 mgr.y (mgr.44107) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.419428+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.427900+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.430674+0000 mon.c (mon.1) 34 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.431313+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.889321+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.899824+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.901660+0000 mon.c (mon.1) 36 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.902521+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:02.903320+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:03 vm08 bash[46122]: audit 2026-03-09T18:44:03.098062+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: cephadm 2026-03-09T18:44:02.430479+0000 mgr.y (mgr.44107) 34 : cephadm [INF] Reconfiguring osd.0 (monmap changed)...
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: cephadm 2026-03-09T18:44:02.432843+0000 mgr.y (mgr.44107) 35 : cephadm [INF] Reconfiguring daemon osd.0 on vm00
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: cephadm 2026-03-09T18:44:02.901393+0000 mgr.y (mgr.44107) 36 : cephadm [INF] Reconfiguring mon.a (monmap changed)...
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: cephadm 2026-03-09T18:44:02.904198+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.340299+0000 mon.a (mon.0) 74 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.350311+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: cephadm 2026-03-09T18:44:03.353077+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.353408+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.354441+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: cephadm 2026-03-09T18:44:03.356226+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring daemon osd.1 on vm00
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.813619+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.822861+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.824464+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.824724+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:04.348 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.825613+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:03.826431+0000 mon.c (mon.1) 44 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.230236+0000 mon.a (mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.237505+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:04.349 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.237505+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.240071+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.240071+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.240295+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.240295+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:44:04.349 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:04 vm00 bash[69512]: audit 2026-03-09T18:44:04.242301+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: cephadm 2026-03-09T18:44:02.430479+0000 mgr.y (mgr.44107) 34 : cephadm [INF] Reconfiguring osd.0 (monmap changed)...
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: cephadm 2026-03-09T18:44:02.432843+0000 mgr.y (mgr.44107) 35 : cephadm [INF] Reconfiguring daemon osd.0 on vm00
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: cephadm 2026-03-09T18:44:02.901393+0000 mgr.y (mgr.44107) 36 : cephadm [INF] Reconfiguring mon.a (monmap changed)...
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: cephadm 2026-03-09T18:44:02.904198+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.340299+0000 mon.a (mon.0) 74 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.350311+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: cephadm 2026-03-09T18:44:03.353077+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.353408+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.354441+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: cephadm 2026-03-09T18:44:03.356226+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring daemon osd.1 on vm00
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.813619+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.822861+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.824464+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:04.600 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.824724+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.825613+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:03.826431+0000 mon.c (mon.1) 44 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:04.230236+0000 mon.a (mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:04.237505+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:04.240071+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:04.240295+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:04.601 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:04 vm00 bash[65531]: audit 2026-03-09T18:44:04.242301+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: cephadm 2026-03-09T18:44:02.430479+0000 mgr.y (mgr.44107) 34 : cephadm [INF] Reconfiguring osd.0 (monmap changed)...
2026-03-09T18:44:04.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: cephadm 2026-03-09T18:44:02.432843+0000 mgr.y (mgr.44107) 35 : cephadm [INF] Reconfiguring daemon osd.0 on vm00
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: cephadm 2026-03-09T18:44:02.901393+0000 mgr.y (mgr.44107) 36 : cephadm [INF] Reconfiguring mon.a (monmap changed)...
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: cephadm 2026-03-09T18:44:02.904198+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.340299+0000 mon.a (mon.0) 74 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.350311+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: cephadm 2026-03-09T18:44:03.353077+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.353408+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.354441+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: cephadm 2026-03-09T18:44:03.356226+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring daemon osd.1 on vm00
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.813619+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.822861+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.824464+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.824724+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.825613+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:03.826431+0000 mon.c (mon.1) 44 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:04.230236+0000 mon.a (mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:04.237505+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:04.240071+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:04.240295+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:04 vm08 bash[46122]: audit 2026-03-09T18:44:04.242301+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: cephadm 2026-03-09T18:44:03.823838+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: cephadm 2026-03-09T18:44:03.827277+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm00
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: cluster 2026-03-09T18:44:04.053466+0000 mgr.y (mgr.44107) 42 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: cephadm 2026-03-09T18:44:04.239807+0000 mgr.y (mgr.44107) 43 : cephadm [INF] Reconfiguring rgw.foo.vm00.ygjynr (monmap changed)...
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: cephadm 2026-03-09T18:44:04.243113+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm00.ygjynr on vm00
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:04.669135+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:04.678183+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:04.681578+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:04.682823+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:05.129439+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:05.136054+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:05.137281+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:05 vm00 bash[69512]: audit 2026-03-09T18:44:05.137841+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: cephadm 2026-03-09T18:44:03.823838+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: cephadm 2026-03-09T18:44:03.827277+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm00
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: cluster 2026-03-09T18:44:04.053466+0000 mgr.y (mgr.44107) 42 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: cephadm 2026-03-09T18:44:04.239807+0000 mgr.y (mgr.44107) 43 : cephadm [INF] Reconfiguring rgw.foo.vm00.ygjynr (monmap changed)...
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: cephadm 2026-03-09T18:44:04.243113+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm00.ygjynr on vm00
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:04.669135+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:04.678183+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:04.681578+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
"auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:04.682823+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:04.682823+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.129439+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.129439+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.136054+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.136054+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.137281+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.137281+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 
2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.137841+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:05 vm00 bash[65531]: audit 2026-03-09T18:44:05.137841+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:03.823838+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T18:44:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:03.823838+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T18:44:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:03.827277+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:03.827277+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cluster 2026-03-09T18:44:04.053466+0000 mgr.y (mgr.44107) 42 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cluster 2026-03-09T18:44:04.053466+0000 mgr.y (mgr.44107) 42 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s 
wr, 6 op/s 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.239807+0000 mgr.y (mgr.44107) 43 : cephadm [INF] Reconfiguring rgw.foo.vm00.ygjynr (monmap changed)... 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.239807+0000 mgr.y (mgr.44107) 43 : cephadm [INF] Reconfiguring rgw.foo.vm00.ygjynr (monmap changed)... 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.243113+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.243113+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.669135+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.669135+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.678183+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.678183+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.681578+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 
2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.681578+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.682823+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:04.682823+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.129439+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.129439+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.136054+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.136054+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.137281+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:44:05.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.137281+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.137841+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:05.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:05 vm08 bash[46122]: audit 2026-03-09T18:44:05.137841+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:04.681332+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:04.681332+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:04.684789+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring daemon osd.4 on vm08 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:04.684789+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring daemon osd.4 on vm08 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.137112+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.137112+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.139013+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring daemon osd.5 on vm08 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.139013+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring daemon osd.5 on vm08 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.562161+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.562161+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.568073+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.568073+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.569739+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.569739+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.570008+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.570008+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.570530+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.570530+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.571125+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.571125+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.986300+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.986300+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.992724+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.992724+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.994296+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.994296+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.994852+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:06 vm00 bash[69512]: audit 2026-03-09T18:44:05.994852+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 
bash[65531]: cephadm 2026-03-09T18:44:04.681332+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:04.681332+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:04.684789+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring daemon osd.4 on vm08 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:04.684789+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring daemon osd.4 on vm08 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.137112+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.137112+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-09T18:44:06.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.139013+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring daemon osd.5 on vm08 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.139013+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring daemon osd.5 on vm08 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.562161+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.562161+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.568073+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.568073+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.569739+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.569739+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.630 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.570008+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.570008+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.570530+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.570530+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.571125+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.571125+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.986300+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.986300+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.992724+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.992724+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.994296+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.994296+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.994852+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:06 vm00 bash[65531]: audit 2026-03-09T18:44:05.994852+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.681332+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 
2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.681332+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.684789+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring daemon osd.4 on vm08 2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:04.684789+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring daemon osd.4 on vm08 2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.137112+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.137112+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.139013+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring daemon osd.5 on vm08 2026-03-09T18:44:06.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.139013+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring daemon osd.5 on vm08 2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.562161+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.562161+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.568073+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.568073+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.569739+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.569739+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:44:06.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.570008+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.570530+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.571125+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.986300+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.992724+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.994296+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-09T18:44:06.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:06 vm08 bash[46122]: audit 2026-03-09T18:44:05.994852+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.569580+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.571826+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring daemon mgr.x on vm08
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.994159+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: cephadm 2026-03-09T18:44:05.996143+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring daemon osd.6 on vm08
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: cluster 2026-03-09T18:44:06.054155+0000 mgr.y (mgr.44107) 53 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.456834+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.464766+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.467375+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.468178+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.468978+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.871017+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.878164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.880710+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.881178+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:06.883300+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:07.273293+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:07.279447+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:07.281459+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T18:44:07.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:07 vm08 bash[46122]: audit 2026-03-09T18:44:07.282202+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.569580+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.571826+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring daemon mgr.x on vm08
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.994159+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: cephadm 2026-03-09T18:44:05.996143+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring daemon osd.6 on vm08
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: cluster 2026-03-09T18:44:06.054155+0000 mgr.y (mgr.44107) 53 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.456834+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.464766+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.467375+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.468178+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.468978+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.871017+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.878164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.880710+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.881178+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:06.883300+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:07.273293+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:07.279447+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:07.281459+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:07 vm00 bash[69512]: audit 2026-03-09T18:44:07.282202+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.569580+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.571826+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring daemon mgr.x on vm08
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.994159+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: cephadm 2026-03-09T18:44:05.996143+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring daemon osd.6 on vm08
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: cluster 2026-03-09T18:44:06.054155+0000 mgr.y (mgr.44107) 53 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.456834+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.464766+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.467375+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.468178+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.468978+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.871017+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.878164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.880710+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.881178+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:06.883300+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:07.273293+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:07.279447+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:07.281459+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T18:44:07.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:07 vm00 bash[65531]: audit 2026-03-09T18:44:07.282202+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: cephadm 2026-03-09T18:44:06.467057+0000 mgr.y (mgr.44107) 54 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: cephadm 2026-03-09T18:44:06.469682+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring daemon mon.b on vm08
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: cephadm 2026-03-09T18:44:06.880367+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring rgw.foo.vm08.rcuedn (monmap changed)...
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: cephadm 2026-03-09T18:44:06.883917+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm08.rcuedn on vm08
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.281135+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring osd.7 (monmap changed)...
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.283466+0000 mgr.y (mgr.44107) 59 : cephadm [INF] Reconfiguring daemon osd.7 on vm08
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.701623+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.707684+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.737875+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.739605+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.740664+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.746060+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.747286+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.747787+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.750272+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.752882+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.752882+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.753290+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.753290+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.755910+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.755910+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.758443+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.758443+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.762200+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.762200+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.765216+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.765216+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.768994+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.768994+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.771665+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.771665+0000 mon.c (mon.1) 70 : audit 
[DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.775636+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.775636+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.778212+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.778212+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.780092+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.780092+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.780300+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.780300+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.781334+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.781334+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.781538+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.781538+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.782719+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.782719+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.786356+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.786356+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.789333+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.789333+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.793087+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.793087+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.796064+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.796064+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.799952+0000 
mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.799952+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.802107+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.802107+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.803183+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.803183+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.803311+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.803311+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.803831+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.803831+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.804004+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.804004+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.804673+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.804673+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.805745+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.805745+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.805866+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.805866+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.806368+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.806368+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.806551+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.806551+0000 mon.a (mon.0) 116 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.807228+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.807228+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.808327+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.808327+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.808496+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.808496+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.808926+0000 mon.c (mon.1) 85 : audit [INF] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.808926+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.809058+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.809058+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.809804+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.809804+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.810872+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.810872+0000 
mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.811046+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.811046+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.811497+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.811497+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.811666+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.811666+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.812252+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.812252+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.813307+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.813307+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.813483+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.813483+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.813919+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.813919+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.814129+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.814129+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.814841+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.814841+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.815908+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.815908+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: 
dispatch
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.816081+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.816516+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.816634+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.817701+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.817838+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.820547+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:44:08.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.823350+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.823528+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.826178+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.828798+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.828917+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.831496+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.847278+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.847407+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.847921+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.848094+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: cephadm 2026-03-09T18:44:06.467057+0000 mgr.y (mgr.44107) 54 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: cephadm 2026-03-09T18:44:06.469682+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring daemon mon.b on vm08
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: cephadm 2026-03-09T18:44:06.880367+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring rgw.foo.vm08.rcuedn (monmap changed)...
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: cephadm 2026-03-09T18:44:06.883917+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm08.rcuedn on vm08
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.281135+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring osd.7 (monmap changed)...
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.283466+0000 mgr.y (mgr.44107) 59 : cephadm [INF] Reconfiguring daemon osd.7 on vm08
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.701623+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.707684+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.737875+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.739605+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.740664+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.746060+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.747286+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.747787+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.750272+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.752882+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.753290+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.755910+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.758443+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.762200+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.765216+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.768994+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.771665+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.775636+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.778212+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.780092+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.780300+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.781334+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.781538+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.782719+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.786356+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.789333+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.793087+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.796064+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.799952+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.802107+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.803183+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.803311+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.803831+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.804004+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.804673+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.805745+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.805866+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.806368+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.806551+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.807228+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.808327+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.808496+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.808926+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.809058+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.809804+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.810872+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.811046+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.811497+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.811666+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.812252+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.813307+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.813483+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.813919+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.814129+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.814841+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.814841+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.815908+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.815908+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.816081+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.816081+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.816516+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.635 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.816516+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.816634+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.816634+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.817701+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.817701+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.817838+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.817838+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.820547+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.820547+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.823350+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.823350+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.823528+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.823528+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.826178+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.826178+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.828798+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.828798+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.828917+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.828917+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.831496+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 
2026-03-09T18:44:07.831496+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.847278+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.847278+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.847407+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.847407+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.847921+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.847921+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 
2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.848094+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:44:08.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.848094+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.852070+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.852070+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.853399+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.853399+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.853576+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.853576+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.854032+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.854032+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.854223+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.854223+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.858268+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 
2026-03-09T18:44:07.858268+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.859496+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.859496+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.859696+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.859696+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.860120+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.860120+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.860260+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.860260+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.863993+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.863993+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865233+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865233+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 
2026-03-09T18:44:07.865447+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865447+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865883+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865883+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865998+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.865998+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.869656+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:44:08.636 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.869656+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.870590+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.870590+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.870715+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.870715+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.872988+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.872988+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": 
"config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.875536+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.875536+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.875667+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.875667+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876103+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876103+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 
2026-03-09T18:44:07.876302+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876302+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876740+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876740+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876905+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.876905+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.877372+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.636 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.877372+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.877543+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.877982+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.878152+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.878601+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.878798+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.879402+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.879573+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.881786+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.884133+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.885205+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.885654+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.889571+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.934261+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.935301+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.935713+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:07.939934+0000 mon.a (mon.0) 154 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:08 vm00 bash[69512]: audit 2026-03-09T18:44:08.096608+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.852070+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.853399+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.853576+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.854032+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.854223+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.858268+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.859496+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.859696+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.860120+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.860260+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.863993+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.865233+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.865447+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:44:08.637 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.865883+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.865998+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.869656+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.870590+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.870715+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.872988+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.875536+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.875667+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.876103+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.876302+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.876740+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.876905+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.877372+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.877543+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.877982+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.878152+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.878601+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.878798+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.879402+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.879573+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.881786+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.884133+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.885205+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.885654+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.889571+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.934261+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.935301+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:08.638 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.935713+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:08.639 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:07.939934+0000 mon.a (mon.0) 154 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.639 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:08 vm00 bash[65531]: audit 2026-03-09T18:44:08.096608+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: cephadm 2026-03-09T18:44:06.467057+0000 mgr.y (mgr.44107) 54 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: cephadm 2026-03-09T18:44:06.469682+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring daemon mon.b on vm08
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: cephadm 2026-03-09T18:44:06.880367+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring rgw.foo.vm08.rcuedn (monmap changed)...
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: cephadm 2026-03-09T18:44:06.883917+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm08.rcuedn on vm08
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.281135+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring osd.7 (monmap changed)...
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.283466+0000 mgr.y (mgr.44107) 59 : cephadm [INF] Reconfiguring daemon osd.7 on vm08
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.701623+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.707684+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.737875+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.739605+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199'
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.739605+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.740664+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.740664+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.746060+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.746060+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.747286+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.747286+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 
2026-03-09T18:44:07.747787+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.747787+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.750272+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.750272+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.752882+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.752882+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.753290+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.753290+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.755910+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.755910+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.758443+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.758443+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.762200+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.762200+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.765216+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.765216+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.768994+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.768994+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.771665+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.771665+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.775636+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.775636+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.778212+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: 
audit 2026-03-09T18:44:07.778212+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.780092+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.780092+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.780300+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.780300+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.781334+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.781334+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.781538+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.781538+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.782719+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.782719+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.786356+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.786356+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.789333+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.789333+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.793087+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.793087+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.796064+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.796064+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.799952+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.799952+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.802107+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.802107+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 
2026-03-09T18:44:07.803183+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.803183+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.803311+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.803311+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.803831+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.803831+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.804004+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 
2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.804004+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.804673+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.804673+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.805745+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.805745+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.805866+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.805866+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: 
dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.806368+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.806368+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.806551+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.806551+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.807228+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.807228+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.808327+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.808327+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.808496+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.808496+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.808926+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.808926+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.809058+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.809058+0000 mon.a (mon.0) 118 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.809804+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.809804+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.810872+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.810872+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.811046+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.811046+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.811497+0000 mon.c (mon.1) 88 : audit [INF] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.811497+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.811666+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.811666+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.812252+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.812252+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.813307+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.813307+0000 
mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.813483+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.813483+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.813919+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.813919+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.814129+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.814129+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.814841+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.814841+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.815908+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.815908+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.816081+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.816081+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.816516+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.816516+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.816634+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.816634+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.817701+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.817701+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.817838+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.817838+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mgr"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.820547+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.820547+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.823350+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.823350+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.823528+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.823528+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.826178+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.826178+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.828798+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.828798+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.828917+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.828917+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.831496+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 
2026-03-09T18:44:07.831496+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.847278+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.847278+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.847407+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.847407+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.847921+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.847921+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 
2026-03-09T18:44:08.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.848094+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.848094+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.852070+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.852070+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.853399+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.853399+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.853576+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.853576+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.854032+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.854032+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.854223+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.854223+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.858268+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 
2026-03-09T18:44:07.858268+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.859496+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.859496+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.859696+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.859696+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.860120+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.860120+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.860260+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.860260+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.863993+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.863993+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865233+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865233+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 
2026-03-09T18:44:07.865447+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865447+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865883+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865883+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865998+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.865998+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.869656+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:44:08.728 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.869656+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.870590+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.870590+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.870715+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.870715+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.872988+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.872988+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": 
"config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.875536+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.875536+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.875667+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.875667+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876103+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876103+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 
2026-03-09T18:44:07.876302+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876302+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876740+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876740+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876905+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.876905+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.877372+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.877372+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.877543+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.877543+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.877982+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.877982+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.878152+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.878152+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.878601+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.878601+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.878798+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.878798+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.879402+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.879402+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.879573+0000 mon.a (mon.0) 151 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.879573+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.881786+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.881786+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.884133+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.884133+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.885205+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.885205+0000 mon.c (mon.1) 115 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.885654+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.885654+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.889571+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.889571+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.934261+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.934261+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.935301+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:08.729 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.935301+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.935713+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:07.939934+0000 mon.a (mon.0) 154 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:08.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:08 vm08 bash[46122]: audit 2026-03-09T18:44:08.096608+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:44:09.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.741363+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.759247+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.766162+0000 mgr.y (mgr.44107) 62 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.772466+0000 mgr.y (mgr.44107) 63 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.779034+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.783503+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.790117+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.796867+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.802513+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.805071+0000 mgr.y (mgr.44107) 69 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.807645+0000 mgr.y (mgr.44107) 70 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.810237+0000 mgr.y (mgr.44107) 71 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.812649+0000 mgr.y (mgr.44107) 72 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.815232+0000 mgr.y (mgr.44107) 73 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.817073+0000 mgr.y (mgr.44107) 74 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cephadm 2026-03-09T18:44:07.879173+0000 mgr.y (mgr.44107) 75 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:09 vm00 bash[65531]: cluster 2026-03-09T18:44:08.054513+0000 mgr.y (mgr.44107) 76 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.741363+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.759247+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.766162+0000 mgr.y (mgr.44107) 62 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.772466+0000 mgr.y (mgr.44107) 63 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.779034+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.783503+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.790117+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.796867+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:44:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.802513+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.805071+0000 mgr.y (mgr.44107) 69 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.807645+0000 mgr.y (mgr.44107) 70 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.810237+0000 mgr.y (mgr.44107) 71 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.812649+0000 mgr.y (mgr.44107) 72 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.815232+0000 mgr.y (mgr.44107) 73 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.817073+0000 mgr.y (mgr.44107) 74 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cephadm 2026-03-09T18:44:07.879173+0000 mgr.y (mgr.44107) 75 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:09 vm00 bash[69512]: cluster 2026-03-09T18:44:08.054513+0000 mgr.y (mgr.44107) 76 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:09.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:44:09] "GET /metrics HTTP/1.1" 200 37590 "" "Prometheus/2.51.0"
2026-03-09T18:44:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.741363+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:44:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.759247+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.766162+0000 mgr.y (mgr.44107) 62 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.772466+0000 mgr.y (mgr.44107) 63 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.779034+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.783503+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.790117+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.796867+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.802513+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.805071+0000 mgr.y (mgr.44107) 69 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.807645+0000 mgr.y (mgr.44107) 70 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.810237+0000 mgr.y (mgr.44107) 71 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.812649+0000 mgr.y (mgr.44107) 72 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.815232+0000 mgr.y (mgr.44107) 73 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.817073+0000 mgr.y (mgr.44107) 74 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cephadm 2026-03-09T18:44:07.879173+0000 mgr.y (mgr.44107) 75 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:44:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:09 vm08 bash[46122]: cluster 2026-03-09T18:44:08.054513+0000 mgr.y (mgr.44107) 76 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:11.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:11 vm00 bash[65531]: cluster 2026-03-09T18:44:10.055062+0000 mgr.y (mgr.44107) 77 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:11.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:11 vm00 bash[69512]: cluster 2026-03-09T18:44:10.055062+0000 mgr.y (mgr.44107) 77 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:11 vm08 bash[46122]: cluster 2026-03-09T18:44:10.055062+0000 mgr.y (mgr.44107) 77 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:13.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:13 vm00 bash[65531]: audit 2026-03-09T18:44:11.448510+0000 mgr.y (mgr.44107) 78 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:13.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:13 vm00 bash[65531]: cluster 2026-03-09T18:44:12.055382+0000 mgr.y (mgr.44107) 79 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:13 vm00 bash[69512]: audit 2026-03-09T18:44:11.448510+0000 mgr.y (mgr.44107) 78 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:13 vm00 bash[69512]: cluster 2026-03-09T18:44:12.055382+0000 mgr.y (mgr.44107) 79 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:13 vm08 bash[46122]: audit 2026-03-09T18:44:11.448510+0000 mgr.y (mgr.44107) 78 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:13 vm08 bash[46122]: cluster 2026-03-09T18:44:12.055382+0000 mgr.y (mgr.44107) 79 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:15.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:15 vm00 bash[65531]: cluster 2026-03-09T18:44:14.055785+0000 mgr.y (mgr.44107) 80 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:15 vm00 bash[69512]: cluster 2026-03-09T18:44:14.055785+0000 mgr.y (mgr.44107) 80 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:15.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:15 vm08 bash[46122]: cluster 2026-03-09T18:44:14.055785+0000 mgr.y (mgr.44107) 80 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:17.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:17 vm00 bash[65531]: cluster 2026-03-09T18:44:16.056254+0000 mgr.y (mgr.44107) 81 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:17.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:17 vm00 bash[69512]: cluster 2026-03-09T18:44:16.056254+0000 mgr.y (mgr.44107) 81 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:17 vm08 bash[46122]: cluster 2026-03-09T18:44:16.056254+0000 mgr.y (mgr.44107) 81 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:18 vm08 bash[46122]: audit 2026-03-09T18:44:18.097851+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:44:18.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:18 vm00 bash[65531]: audit 2026-03-09T18:44:18.097851+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:44:18.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:18 vm00 bash[69512]: audit 2026-03-09T18:44:18.097851+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:44:19.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:19 vm08 bash[46122]: cluster 2026-03-09T18:44:18.056508+0000 mgr.y (mgr.44107) 82 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:19.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:19 vm00 bash[65531]: cluster 2026-03-09T18:44:18.056508+0000 mgr.y (mgr.44107) 82 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:19.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:19 vm00 bash[69512]: cluster 2026-03-09T18:44:18.056508+0000 mgr.y (mgr.44107) 82 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:19.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:44:19] "GET /metrics HTTP/1.1" 200 37588 "" "Prometheus/2.51.0"
2026-03-09T18:44:21.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:21 vm08 bash[46122]: cluster 2026-03-09T18:44:20.056966+0000 mgr.y (mgr.44107) 83 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:21.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:21 vm00 bash[65531]: cluster 2026-03-09T18:44:20.056966+0000 mgr.y (mgr.44107) 83 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:21.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:21 vm00 bash[69512]: cluster 2026-03-09T18:44:20.056966+0000 mgr.y (mgr.44107) 83 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:44:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:23 vm08 bash[46122]: audit 2026-03-09T18:44:21.457554+0000 mgr.y (mgr.44107) 84 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:23 vm08 bash[46122]: cluster 2026-03-09T18:44:22.057222+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:23.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:23 vm00 bash[65531]: audit 2026-03-09T18:44:21.457554+0000 mgr.y (mgr.44107) 84 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:23.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:23 vm00 bash[65531]: cluster 2026-03-09T18:44:22.057222+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:44:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:23 vm00 bash[69512]: audit 2026-03-09T18:44:21.457554+0000 mgr.y (mgr.44107) 84 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:44:23.879 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:23 vm00 bash[69512]: cluster 2026-03-09T18:44:22.057222+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:23 vm00 bash[69512]: cluster 2026-03-09T18:44:22.057222+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:25.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:25 vm08 bash[46122]: cluster 2026-03-09T18:44:24.057475+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:25.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:25 vm08 bash[46122]: cluster 2026-03-09T18:44:24.057475+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:25.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:25 vm00 bash[65531]: cluster 2026-03-09T18:44:24.057475+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:25.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:25 vm00 bash[65531]: cluster 2026-03-09T18:44:24.057475+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:25.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:25 vm00 bash[69512]: cluster 2026-03-09T18:44:24.057475+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:25.878 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:25 vm00 bash[69512]: cluster 2026-03-09T18:44:24.057475+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:27 vm08 bash[46122]: cluster 2026-03-09T18:44:26.057931+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:27 vm08 bash[46122]: cluster 2026-03-09T18:44:26.057931+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:27.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:27 vm00 bash[65531]: cluster 2026-03-09T18:44:26.057931+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:27.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:27 vm00 bash[65531]: cluster 2026-03-09T18:44:26.057931+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:27 vm00 bash[69512]: cluster 2026-03-09T18:44:26.057931+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:27.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:27 vm00 bash[69512]: cluster 2026-03-09T18:44:26.057931+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:28.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:28 vm08 bash[46122]: cluster 2026-03-09T18:44:28.058178+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:28 vm08 bash[46122]: cluster 2026-03-09T18:44:28.058178+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:28.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:28 vm00 bash[65531]: cluster 2026-03-09T18:44:28.058178+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:28.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:28 vm00 bash[65531]: cluster 2026-03-09T18:44:28.058178+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:28 vm00 bash[69512]: cluster 2026-03-09T18:44:28.058178+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:28 vm00 bash[69512]: cluster 2026-03-09T18:44:28.058178+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:44:29] "GET /metrics HTTP/1.1" 200 37588 "" "Prometheus/2.51.0" 2026-03-09T18:44:30.327 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (14m) 36s ago 21m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (2m) 36s ago 21m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 36s ago 21m 43.5M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (2m) 36s ago 24m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (11m) 36s ago 25m 508M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (46s) 36s ago 25m 37.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (96s) 36s ago 24m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (60s) 36s ago 24m 36.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (14m) 36s ago 21m 7560k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (14m) 36s ago 21m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801 
2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (24m) 36s ago 24m 53.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (23m) 36s ago 23m 55.3M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (23m) 36s ago 23m 49.3M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (23m) 36s ago 23m 55.6M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (23m) 36s ago 23m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (22m) 36s ago 22m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (22m) 36s ago 22m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (22m) 36s ago 22m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 36s ago 21m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (21m) 36s ago 21m 89.2M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:44:30.802 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (21m) 36s ago 21m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:44:30.855 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e 
'"'"'.mon | length == 1'"'"'' 2026-03-09T18:44:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:31 vm00 bash[65531]: cluster 2026-03-09T18:44:30.058784+0000 mgr.y (mgr.44107) 89 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:31 vm00 bash[65531]: cluster 2026-03-09T18:44:30.058784+0000 mgr.y (mgr.44107) 89 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:31 vm00 bash[65531]: audit 2026-03-09T18:44:30.242912+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:31.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:31 vm00 bash[65531]: audit 2026-03-09T18:44:30.242912+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:31.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:31 vm00 bash[69512]: cluster 2026-03-09T18:44:30.058784+0000 mgr.y (mgr.44107) 89 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:31.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:31 vm00 bash[69512]: cluster 2026-03-09T18:44:30.058784+0000 mgr.y (mgr.44107) 89 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:31.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:31 vm00 bash[69512]: audit 2026-03-09T18:44:30.242912+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade 
status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:31.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:31 vm00 bash[69512]: audit 2026-03-09T18:44:30.242912+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:31.428 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:44:31.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:31 vm08 bash[46122]: cluster 2026-03-09T18:44:30.058784+0000 mgr.y (mgr.44107) 89 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:31.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:31 vm08 bash[46122]: cluster 2026-03-09T18:44:30.058784+0000 mgr.y (mgr.44107) 89 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:31.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:31 vm08 bash[46122]: audit 2026-03-09T18:44:30.242912+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:31.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:31 vm08 bash[46122]: audit 2026-03-09T18:44:30.242912+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:31.480 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mon | keys'"'"' | grep $sha1' 2026-03-09T18:44:32.030 INFO:teuthology.orchestra.run.vm00.stdout: "ceph 
version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T18:44:32.092 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 5'"'"'' 2026-03-09T18:44:32.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:32 vm00 bash[69512]: audit 2026-03-09T18:44:30.801553+0000 mgr.y (mgr.44107) 91 : audit [DBG] from='client.54165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:32.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:32 vm00 bash[69512]: audit 2026-03-09T18:44:30.801553+0000 mgr.y (mgr.44107) 91 : audit [DBG] from='client.54165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:32.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:32 vm00 bash[69512]: audit 2026-03-09T18:44:31.421092+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.100:0/1818924023' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:32 vm00 bash[69512]: audit 2026-03-09T18:44:31.421092+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.100:0/1818924023' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:32 vm00 bash[69512]: audit 2026-03-09T18:44:32.021603+0000 mon.a (mon.0) 157 : audit [DBG] from='client.? 
192.168.123.100:0/3626841926' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:32 vm00 bash[69512]: audit 2026-03-09T18:44:32.021603+0000 mon.a (mon.0) 157 : audit [DBG] from='client.? 192.168.123.100:0/3626841926' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:32 vm00 bash[65531]: audit 2026-03-09T18:44:30.801553+0000 mgr.y (mgr.44107) 91 : audit [DBG] from='client.54165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:32 vm00 bash[65531]: audit 2026-03-09T18:44:30.801553+0000 mgr.y (mgr.44107) 91 : audit [DBG] from='client.54165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:32 vm00 bash[65531]: audit 2026-03-09T18:44:31.421092+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.100:0/1818924023' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:32 vm00 bash[65531]: audit 2026-03-09T18:44:31.421092+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.100:0/1818924023' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:32 vm00 bash[65531]: audit 2026-03-09T18:44:32.021603+0000 mon.a (mon.0) 157 : audit [DBG] from='client.? 192.168.123.100:0/3626841926' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.291 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:32 vm00 bash[65531]: audit 2026-03-09T18:44:32.021603+0000 mon.a (mon.0) 157 : audit [DBG] from='client.? 
192.168.123.100:0/3626841926' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:32 vm08 bash[46122]: audit 2026-03-09T18:44:30.801553+0000 mgr.y (mgr.44107) 91 : audit [DBG] from='client.54165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:32.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:32 vm08 bash[46122]: audit 2026-03-09T18:44:30.801553+0000 mgr.y (mgr.44107) 91 : audit [DBG] from='client.54165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:32.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:32 vm08 bash[46122]: audit 2026-03-09T18:44:31.421092+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.100:0/1818924023' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:32 vm08 bash[46122]: audit 2026-03-09T18:44:31.421092+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.100:0/1818924023' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:32 vm08 bash[46122]: audit 2026-03-09T18:44:32.021603+0000 mon.a (mon.0) 157 : audit [DBG] from='client.? 192.168.123.100:0/3626841926' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:32.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:32 vm08 bash[46122]: audit 2026-03-09T18:44:32.021603+0000 mon.a (mon.0) 157 : audit [DBG] from='client.? 
192.168.123.100:0/3626841926' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:33.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:33 vm00 bash[69512]: audit 2026-03-09T18:44:31.466219+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:33 vm00 bash[69512]: audit 2026-03-09T18:44:31.466219+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:33 vm00 bash[69512]: cluster 2026-03-09T18:44:32.059142+0000 mgr.y (mgr.44107) 93 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:33 vm00 bash[69512]: cluster 2026-03-09T18:44:32.059142+0000 mgr.y (mgr.44107) 93 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:33 vm00 bash[69512]: audit 2026-03-09T18:44:33.098393+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:33 vm00 bash[69512]: audit 2026-03-09T18:44:33.098393+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:33 vm00 bash[65531]: audit 2026-03-09T18:44:31.466219+0000 mgr.y (mgr.44107) 92 : audit 
[DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:33 vm00 bash[65531]: audit 2026-03-09T18:44:31.466219+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:33 vm00 bash[65531]: cluster 2026-03-09T18:44:32.059142+0000 mgr.y (mgr.44107) 93 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:33 vm00 bash[65531]: cluster 2026-03-09T18:44:32.059142+0000 mgr.y (mgr.44107) 93 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:33 vm00 bash[65531]: audit 2026-03-09T18:44:33.098393+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:33.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:33 vm00 bash[65531]: audit 2026-03-09T18:44:33.098393+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:33 vm08 bash[46122]: audit 2026-03-09T18:44:31.466219+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:33 vm08 bash[46122]: audit 
2026-03-09T18:44:31.466219+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:33 vm08 bash[46122]: cluster 2026-03-09T18:44:32.059142+0000 mgr.y (mgr.44107) 93 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:33 vm08 bash[46122]: cluster 2026-03-09T18:44:32.059142+0000 mgr.y (mgr.44107) 93 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:33 vm08 bash[46122]: audit 2026-03-09T18:44:33.098393+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:33 vm08 bash[46122]: audit 2026-03-09T18:44:33.098393+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:34.221 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:44:34.267 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T18:44:34.267 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:34 vm00 bash[69512]: audit 2026-03-09T18:44:32.633869+0000 mgr.y (mgr.44107) 94 : audit [DBG] from='client.44179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", 
"image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:34.267 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:34 vm00 bash[69512]: audit 2026-03-09T18:44:32.633869+0000 mgr.y (mgr.44107) 94 : audit [DBG] from='client.44179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:34.267 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:34 vm00 bash[65531]: audit 2026-03-09T18:44:32.633869+0000 mgr.y (mgr.44107) 94 : audit [DBG] from='client.44179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:34.267 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:34 vm00 bash[65531]: audit 2026-03-09T18:44:32.633869+0000 mgr.y (mgr.44107) 94 : audit [DBG] from='client.44179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:34 vm08 bash[46122]: audit 2026-03-09T18:44:32.633869+0000 mgr.y (mgr.44107) 94 : audit [DBG] from='client.44179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:34 vm08 bash[46122]: audit 2026-03-09T18:44:32.633869+0000 mgr.y (mgr.44107) 94 : audit [DBG] from='client.44179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:34.741 
INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:44:34.741 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null, 2026-03-09T18:44:34.741 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false, 2026-03-09T18:44:34.741 INFO:teuthology.orchestra.run.vm00.stdout: "which": "", 2026-03-09T18:44:34.741 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:44:34.741 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null, 2026-03-09T18:44:34.742 INFO:teuthology.orchestra.run.vm00.stdout: "message": "", 2026-03-09T18:44:34.742 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:44:34.742 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:44:34.800 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:44:35.291 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:44:35.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:35 vm00 bash[65531]: cluster 2026-03-09T18:44:34.059419+0000 mgr.y (mgr.44107) 95 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:35.303 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:35 vm00 bash[65531]: cluster 2026-03-09T18:44:34.059419+0000 mgr.y (mgr.44107) 95 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:35.303 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:35 vm00 bash[69512]: cluster 2026-03-09T18:44:34.059419+0000 mgr.y (mgr.44107) 95 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:35.303 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:35 vm00 bash[69512]: cluster 2026-03-09T18:44:34.059419+0000 mgr.y (mgr.44107) 95 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:35.347 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types osd --limit 2' 2026-03-09T18:44:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:35 vm08 bash[46122]: cluster 2026-03-09T18:44:34.059419+0000 mgr.y (mgr.44107) 95 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:35 vm08 bash[46122]: cluster 2026-03-09T18:44:34.059419+0000 mgr.y (mgr.44107) 95 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:36 vm08 bash[46122]: audit 2026-03-09T18:44:34.745236+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54183 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:36 vm08 bash[46122]: audit 2026-03-09T18:44:34.745236+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54183 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:36 vm08 bash[46122]: audit 2026-03-09T18:44:35.290971+0000 mon.b (mon.2) 13 : audit [DBG] from='client.? 
192.168.123.100:0/3684979884' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:44:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:36 vm08 bash[46122]: audit 2026-03-09T18:44:35.290971+0000 mon.b (mon.2) 13 : audit [DBG] from='client.? 192.168.123.100:0/3684979884' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:44:36.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:36 vm00 bash[65531]: audit 2026-03-09T18:44:34.745236+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54183 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:36 vm00 bash[65531]: audit 2026-03-09T18:44:34.745236+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54183 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:36 vm00 bash[65531]: audit 2026-03-09T18:44:35.290971+0000 mon.b (mon.2) 13 : audit [DBG] from='client.? 192.168.123.100:0/3684979884' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:36 vm00 bash[65531]: audit 2026-03-09T18:44:35.290971+0000 mon.b (mon.2) 13 : audit [DBG] from='client.? 
192.168.123.100:0/3684979884' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:36 vm00 bash[69512]: audit 2026-03-09T18:44:34.745236+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54183 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:36 vm00 bash[69512]: audit 2026-03-09T18:44:34.745236+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54183 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:36 vm00 bash[69512]: audit 2026-03-09T18:44:35.290971+0000 mon.b (mon.2) 13 : audit [DBG] from='client.? 192.168.123.100:0/3684979884' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:44:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:36 vm00 bash[69512]: audit 2026-03-09T18:44:35.290971+0000 mon.b (mon.2) 13 : audit [DBG] from='client.? 
192.168.123.100:0/3684979884' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:44:37.258 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:37 vm00 bash[69512]: audit 2026-03-09T18:44:35.826188+0000 mgr.y (mgr.44107) 97 : audit [DBG] from='client.44197 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:37 vm00 bash[69512]: audit 2026-03-09T18:44:35.826188+0000 mgr.y (mgr.44107) 97 : audit [DBG] from='client.44197 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:37 vm00 bash[69512]: cluster 2026-03-09T18:44:36.060055+0000 mgr.y (mgr.44107) 98 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:37 vm00 bash[69512]: cluster 2026-03-09T18:44:36.060055+0000 mgr.y (mgr.44107) 98 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:37 vm00 bash[65531]: audit 2026-03-09T18:44:35.826188+0000 mgr.y (mgr.44107) 97 : audit [DBG] from='client.44197 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", 
"limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:37 vm00 bash[65531]: audit 2026-03-09T18:44:35.826188+0000 mgr.y (mgr.44107) 97 : audit [DBG] from='client.44197 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:37 vm00 bash[65531]: cluster 2026-03-09T18:44:36.060055+0000 mgr.y (mgr.44107) 98 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:37.335 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:37 vm00 bash[65531]: cluster 2026-03-09T18:44:36.060055+0000 mgr.y (mgr.44107) 98 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:37.336 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-09T18:44:37.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:37 vm08 bash[46122]: audit 2026-03-09T18:44:35.826188+0000 mgr.y (mgr.44107) 97 : audit [DBG] from='client.44197 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:37.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:37 vm08 bash[46122]: audit 2026-03-09T18:44:35.826188+0000 mgr.y (mgr.44107) 97 : audit [DBG] from='client.44197 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:37.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:37 vm08 bash[46122]: cluster 2026-03-09T18:44:36.060055+0000 mgr.y (mgr.44107) 98 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:37.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:37 vm08 bash[46122]: cluster 2026-03-09T18:44:36.060055+0000 mgr.y (mgr.44107) 98 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:37.902 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (14m) 44s ago 21m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 
running (2m) 44s ago 21m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (2m) 44s ago 21m 43.5M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (2m) 44s ago 24m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (12m) 44s ago 25m 508M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (54s) 44s ago 25m 37.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (103s) 44s ago 24m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (68s) 44s ago 24m 36.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (14m) 44s ago 22m 7560k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (14m) 44s ago 22m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (24m) 44s ago 24m 53.2M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (24m) 44s ago 24m 55.3M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (23m) 44s ago 23m 49.3M 4096M 17.2.0 e1d6a67b021e 35e072ab4c22 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (23m) 44s ago 23m 55.6M 4096M 17.2.0 e1d6a67b021e 306d680cc55b 2026-03-09T18:44:38.340 
INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (23m) 44s ago 23m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (22m) 44s ago 23m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (22m) 44s ago 22m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (22m) 44s ago 22m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (2m) 44s ago 21m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (21m) 44s ago 21m 89.2M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:44:38.340 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (21m) 44s ago 21m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T18:44:38.592 
INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 10, 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:44:38.592 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:44:38.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: cephadm 2026-03-09T18:44:37.250223+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: cephadm 2026-03-09T18:44:37.250223+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.258212+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.258212+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.265585+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.265585+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.267934+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.267934+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.268772+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.268772+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.295276+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: audit 2026-03-09T18:44:37.295276+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: cephadm 2026-03-09T18:44:37.349681+0000 mgr.y (mgr.44107) 100 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:38 vm00 bash[65531]: cephadm 2026-03-09T18:44:37.349681+0000 mgr.y (mgr.44107) 100 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: cephadm 2026-03-09T18:44:37.250223+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: cephadm 2026-03-09T18:44:37.250223+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.258212+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.258212+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.265585+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.265585+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.267934+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.267934+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.268772+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.268772+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.295276+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: audit 2026-03-09T18:44:37.295276+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: cephadm 2026-03-09T18:44:37.349681+0000 mgr.y (mgr.44107) 100 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:38 vm00 bash[69512]: cephadm 2026-03-09T18:44:37.349681+0000 mgr.y 
(mgr.44107) 100 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: cephadm 2026-03-09T18:44:37.250223+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: cephadm 2026-03-09T18:44:37.250223+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.258212+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.258212+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.265585+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.265585+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.267934+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.267934+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.268772+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.268772+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.295276+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: audit 2026-03-09T18:44:37.295276+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: cephadm 2026-03-09T18:44:37.349681+0000 mgr.y (mgr.44107) 100 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:38 vm08 bash[46122]: cephadm 2026-03-09T18:44:37.349681+0000 mgr.y (mgr.44107) 100 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T18:44:38.875 
INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) osd. Upgrade limited to 2 daemons (2 remaining).", 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "", 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:44:38.875 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:37.894268+0000 mgr.y (mgr.44107) 101 : audit [DBG] from='client.54201 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:37.894268+0000 mgr.y (mgr.44107) 101 : audit [DBG] from='client.54201 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: cluster 2026-03-09T18:44:38.060408+0000 mgr.y (mgr.44107) 102 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: cluster 2026-03-09T18:44:38.060408+0000 mgr.y (mgr.44107) 102 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 
2026-03-09T18:44:38.123451+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.34217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.123451+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.34217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.339190+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.339190+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.595814+0000 mon.a (mon.0) 160 : audit [DBG] from='client.? 192.168.123.100:0/2965345853' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.595814+0000 mon.a (mon.0) 160 : audit [DBG] from='client.? 
192.168.123.100:0/2965345853' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.908436+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.908436+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.529 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.911421+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.911421+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.913085+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.913085+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.919010+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.919010+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.921932+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.921932+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.928290+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.928290+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.932264+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.932264+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.937801+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.937801+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 
2026-03-09T18:44:38.941464+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:39 vm00 bash[69512]: audit 2026-03-09T18:44:38.941464+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:37.894268+0000 mgr.y (mgr.44107) 101 : audit [DBG] from='client.54201 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:37.894268+0000 mgr.y (mgr.44107) 101 : audit [DBG] from='client.54201 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: cluster 2026-03-09T18:44:38.060408+0000 mgr.y (mgr.44107) 102 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: cluster 2026-03-09T18:44:38.060408+0000 mgr.y (mgr.44107) 102 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.123451+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.34217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.530 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.123451+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.34217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.339190+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.339190+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.595814+0000 mon.a (mon.0) 160 : audit [DBG] from='client.? 192.168.123.100:0/2965345853' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.595814+0000 mon.a (mon.0) 160 : audit [DBG] from='client.? 
192.168.123.100:0/2965345853' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.908436+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.908436+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.911421+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.911421+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.913085+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.530 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.913085+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.919010+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.919010+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.921932+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.921932+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.928290+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.928290+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.932264+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.932264+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.937801+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.937801+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 
2026-03-09T18:44:38.941464+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:39.531 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:39 vm00 bash[65531]: audit 2026-03-09T18:44:38.941464+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:37.894268+0000 mgr.y (mgr.44107) 101 : audit [DBG] from='client.54201 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:37.894268+0000 mgr.y (mgr.44107) 101 : audit [DBG] from='client.54201 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: cluster 2026-03-09T18:44:38.060408+0000 mgr.y (mgr.44107) 102 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: cluster 2026-03-09T18:44:38.060408+0000 mgr.y (mgr.44107) 102 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.123451+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.34217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.123451+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.34217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.339190+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.339190+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.595814+0000 mon.a (mon.0) 160 : audit [DBG] from='client.? 192.168.123.100:0/2965345853' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.595814+0000 mon.a (mon.0) 160 : audit [DBG] from='client.? 
192.168.123.100:0/2965345853' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.908436+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.908436+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.911421+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.911421+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.913085+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.913085+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.919010+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.919010+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.921932+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.921932+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.928290+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.928290+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.932264+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.932264+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.937801+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.937801+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 
2026-03-09T18:44:38.941464+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:39.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:39 vm08 bash[46122]: audit 2026-03-09T18:44:38.941464+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:39.779 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:44:39] "GET /metrics HTTP/1.1" 200 37587 "" "Prometheus/2.51.0" 2026-03-09T18:44:40.355 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:44:40.356 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:40.356 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: Stopping Ceph osd.3 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:44:40.623 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 bash[34680]: debug 2026-03-09T18:44:40.396+0000 7faa5e8a2700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:44:40.623 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 bash[34680]: debug 2026-03-09T18:44:40.396+0000 7faa5e8a2700 -1 osd.3 101 *** Got signal Terminated *** 2026-03-09T18:44:40.623 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 bash[34680]: debug 2026-03-09T18:44:40.396+0000 7faa5e8a2700 -1 osd.3 101 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:44:40.623 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 bash[76278]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-3 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:38.878328+0000 mgr.y (mgr.44107) 105 : audit [DBG] from='client.44218 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:38.878328+0000 mgr.y (mgr.44107) 105 : audit [DBG] from='client.44218 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.910154+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.910154+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:44:40.623 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.910181+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.910181+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.913840+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.913840+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.922634+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.922634+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.933120+0000 mgr.y (mgr.44107) 110 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 
bash[69512]: cephadm 2026-03-09T18:44:38.933120+0000 mgr.y (mgr.44107) 110 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:38.941670+0000 mgr.y (mgr.44107) 111 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:38.941670+0000 mgr.y (mgr.44107) 111 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.942762+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:38.942762+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:39.408996+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cephadm 2026-03-09T18:44:39.408996+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T18:44:40.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:39.414267+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:39.414267+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 
2026-03-09T18:44:39.417908+0000 mon.c (mon.1) 130 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:39.417908+0000 mon.c (mon.1) 130 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:39.418930+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: audit 2026-03-09T18:44:39.418930+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cluster 2026-03-09T18:44:40.400516+0000 mon.a (mon.0) 166 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:40 vm00 bash[69512]: cluster 2026-03-09T18:44:40.400516+0000 mon.a (mon.0) 166 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:38.878328+0000 mgr.y (mgr.44107) 105 : audit [DBG] from='client.44218 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:38.878328+0000 mgr.y (mgr.44107) 105 : audit [DBG] from='client.44218 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", 
"target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.910154+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.910154+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.910181+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.910181+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.913840+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.913840+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.922634+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all mon 
2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.922634+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.933120+0000 mgr.y (mgr.44107) 110 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.933120+0000 mgr.y (mgr.44107) 110 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:38.941670+0000 mgr.y (mgr.44107) 111 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:38.941670+0000 mgr.y (mgr.44107) 111 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.942762+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:38.942762+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:39.408996+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cephadm 2026-03-09T18:44:39.408996+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:39.414267+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:39.414267+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:39.417908+0000 mon.c (mon.1) 130 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:39.417908+0000 mon.c (mon.1) 130 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:39.418930+0000 mon.c 
(mon.1) 131 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: audit 2026-03-09T18:44:39.418930+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cluster 2026-03-09T18:44:40.400516+0000 mon.a (mon.0) 166 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T18:44:40.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:40 vm00 bash[65531]: cluster 2026-03-09T18:44:40.400516+0000 mon.a (mon.0) 166 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:38.878328+0000 mgr.y (mgr.44107) 105 : audit [DBG] from='client.44218 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:38.878328+0000 mgr.y (mgr.44107) 105 : audit [DBG] from='client.44218 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.910154+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.910154+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 
2026-03-09T18:44:38.910181+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.910181+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.913840+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.913840+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.922634+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.922634+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.933120+0000 mgr.y (mgr.44107) 110 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.933120+0000 mgr.y (mgr.44107) 110 : cephadm [INF] 
Upgrade: Setting container_image for all crash 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:38.941670+0000 mgr.y (mgr.44107) 111 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:38.941670+0000 mgr.y (mgr.44107) 111 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.942762+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:38.942762+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:39.408996+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cephadm 2026-03-09T18:44:39.408996+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T18:44:40.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:39.414267+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:39.414267+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:39.417908+0000 mon.c (mon.1) 130 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:39.417908+0000 mon.c (mon.1) 130 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:39.418930+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: audit 2026-03-09T18:44:39.418930+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cluster 2026-03-09T18:44:40.400516+0000 mon.a (mon.0) 166 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T18:44:40.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:40 vm08 bash[46122]: cluster 2026-03-09T18:44:40.400516+0000 mon.a (mon.0) 166 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T18:44:40.878 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.3.service: Deactivated successfully. 2026-03-09T18:44:40.879 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:40 vm00 systemd[1]: Stopped Ceph osd.3 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:44:41.342 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.342 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.342 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: Started Ceph osd.3 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:44:41.342 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.343 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.343 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.343 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.343 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.343 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.343 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:44:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:41.630 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:41 vm00 bash[76489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:41.631 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:41 vm00 bash[76489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cephadm 2026-03-09T18:44:39.420644+0000 mgr.y (mgr.44107) 114 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cephadm 2026-03-09T18:44:39.420644+0000 mgr.y (mgr.44107) 114 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cluster 2026-03-09T18:44:40.061028+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cluster 2026-03-09T18:44:40.061028+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cluster 2026-03-09T18:44:40.413901+0000 mon.a (mon.0) 167 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cluster 2026-03-09T18:44:40.413901+0000 mon.a (mon.0) 167 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cluster 
2026-03-09T18:44:40.459699+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: cluster 2026-03-09T18:44:40.459699+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.160307+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.160307+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.180019+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.180019+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.194986+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.194986+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.197457+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:41.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:41 vm08 bash[46122]: audit 2026-03-09T18:44:41.197457+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cephadm 2026-03-09T18:44:39.420644+0000 mgr.y (mgr.44107) 114 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cephadm 2026-03-09T18:44:39.420644+0000 mgr.y (mgr.44107) 114 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cluster 2026-03-09T18:44:40.061028+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cluster 2026-03-09T18:44:40.061028+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cluster 2026-03-09T18:44:40.413901+0000 mon.a (mon.0) 167 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cluster 2026-03-09T18:44:40.413901+0000 mon.a (mon.0) 167 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cluster 2026-03-09T18:44:40.459699+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: cluster 2026-03-09T18:44:40.459699+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.160307+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.160307+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.180019+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.180019+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.194986+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.194986+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.197457+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:41 vm00 bash[65531]: audit 2026-03-09T18:44:41.197457+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cephadm 2026-03-09T18:44:39.420644+0000 mgr.y (mgr.44107) 114 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 
bash[69512]: cephadm 2026-03-09T18:44:39.420644+0000 mgr.y (mgr.44107) 114 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cluster 2026-03-09T18:44:40.061028+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cluster 2026-03-09T18:44:40.061028+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cluster 2026-03-09T18:44:40.413901+0000 mon.a (mon.0) 167 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cluster 2026-03-09T18:44:40.413901+0000 mon.a (mon.0) 167 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cluster 2026-03-09T18:44:40.459699+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: cluster 2026-03-09T18:44:40.459699+0000 mon.a (mon.0) 168 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.160307+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.160307+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.180019+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.180019+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.194986+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.194986+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.197457+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:42.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:41 vm00 bash[69512]: audit 2026-03-09T18:44:41.197457+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:42.629 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T18:44:42.629 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:42.629 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:42.629 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 
2026-03-09T18:44:42.629 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c76ad81e-3b4f-42ac-88aa-1597dac1aae0/osd-block-04bdb6c0-c351-4b7e-b364-865748cfae11 --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-09T18:44:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:42 vm08 bash[46122]: audit 2026-03-09T18:44:41.477708+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:42 vm08 bash[46122]: audit 2026-03-09T18:44:41.477708+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:42 vm08 bash[46122]: cluster 2026-03-09T18:44:41.608602+0000 mon.a (mon.0) 172 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T18:44:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:42 vm08 bash[46122]: cluster 2026-03-09T18:44:41.608602+0000 mon.a (mon.0) 172 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T18:44:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:42 vm08 bash[46122]: cluster 2026-03-09T18:44:42.061374+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:42.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:42 vm08 bash[46122]: cluster 2026-03-09T18:44:42.061374+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:43.128 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:42 vm00 bash[65531]: audit 2026-03-09T18:44:41.477708+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:42 vm00 bash[65531]: audit 2026-03-09T18:44:41.477708+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:42 vm00 bash[65531]: cluster 2026-03-09T18:44:41.608602+0000 mon.a (mon.0) 172 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T18:44:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:42 vm00 bash[65531]: cluster 2026-03-09T18:44:41.608602+0000 mon.a (mon.0) 172 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T18:44:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:42 vm00 bash[65531]: cluster 2026-03-09T18:44:42.061374+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:43.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:42 vm00 bash[65531]: cluster 2026-03-09T18:44:42.061374+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:43.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/ln -snf /dev/ceph-c76ad81e-3b4f-42ac-88aa-1597dac1aae0/osd-block-04bdb6c0-c351-4b7e-b364-865748cfae11 /var/lib/ceph/osd/ceph-3/block 2026-03-09T18:44:43.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/chown -h ceph:ceph 
/var/lib/ceph/osd/ceph-3/block 2026-03-09T18:44:43.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3 2026-03-09T18:44:43.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T18:44:43.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76489]: --> ceph-volume lvm activate successful for osd ID: 3 2026-03-09T18:44:43.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:42 vm00 bash[76849]: debug 2026-03-09T18:44:42.836+0000 7f808809e640 1 -- 192.168.123.100:0/1329032803 <== mon.0 v2:192.168.123.100:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55e025473680 con 0x55e024832000 2026-03-09T18:44:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:42 vm00 bash[69512]: audit 2026-03-09T18:44:41.477708+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:42 vm00 bash[69512]: audit 2026-03-09T18:44:41.477708+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:42 vm00 bash[69512]: cluster 2026-03-09T18:44:41.608602+0000 mon.a (mon.0) 172 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T18:44:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:42 vm00 bash[69512]: cluster 2026-03-09T18:44:41.608602+0000 mon.a (mon.0) 172 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T18:44:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:42 vm00 bash[69512]: cluster 2026-03-09T18:44:42.061374+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap 
v32: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:43.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:42 vm00 bash[69512]: cluster 2026-03-09T18:44:42.061374+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:44:43.879 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:43 vm00 bash[76849]: debug 2026-03-09T18:44:43.520+0000 7f808a908740 -1 Falling back to public interface 2026-03-09T18:44:44.879 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:44 vm00 bash[76849]: debug 2026-03-09T18:44:44.488+0000 7f808a908740 -1 osd.3 0 read_superblock omap replica is missing. 2026-03-09T18:44:44.879 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:44 vm00 bash[76849]: debug 2026-03-09T18:44:44.508+0000 7f808a908740 -1 osd.3 101 log_to_monitors true 2026-03-09T18:44:45.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:45 vm08 bash[46122]: cluster 2026-03-09T18:44:44.061756+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v33: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:44:45.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:45 vm08 bash[46122]: cluster 2026-03-09T18:44:44.061756+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v33: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:44:45.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:45 vm08 bash[46122]: audit 2026-03-09T18:44:44.513465+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:44:45.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:45 vm08 bash[46122]: audit 2026-03-09T18:44:44.513465+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:44:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:45 vm00 bash[65531]: cluster 2026-03-09T18:44:44.061756+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v33: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:44:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:45 vm00 bash[65531]: cluster 2026-03-09T18:44:44.061756+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v33: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:44:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:45 vm00 bash[65531]: audit 2026-03-09T18:44:44.513465+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:44:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:45 vm00 bash[65531]: audit 2026-03-09T18:44:44.513465+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:44:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:45 vm00 bash[69512]: cluster 2026-03-09T18:44:44.061756+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v33: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:44:45.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:45 vm00 bash[69512]: cluster 2026-03-09T18:44:44.061756+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v33: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:44:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:45 vm00 bash[69512]: audit 2026-03-09T18:44:44.513465+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:44:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:45 vm00 bash[69512]: audit 2026-03-09T18:44:44.513465+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:46 vm00 bash[65531]: audit 2026-03-09T18:44:45.120894+0000 mon.a (mon.0) 174 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:46 vm00 bash[65531]: audit 2026-03-09T18:44:45.120894+0000 mon.a (mon.0) 174 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:46 vm00 bash[65531]: cluster 2026-03-09T18:44:45.125122+0000 mon.a (mon.0) 175 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:46 vm00 
bash[65531]: cluster 2026-03-09T18:44:45.125122+0000 mon.a (mon.0) 175 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:46 vm00 bash[65531]: audit 2026-03-09T18:44:45.129010+0000 mon.a (mon.0) 176 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:46 vm00 bash[65531]: audit 2026-03-09T18:44:45.129010+0000 mon.a (mon.0) 176 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:44:46.379 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:46 vm00 bash[76849]: debug 2026-03-09T18:44:46.076+0000 7f8081eb2640 -1 osd.3 101 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:46 vm00 bash[69512]: audit 2026-03-09T18:44:45.120894+0000 mon.a (mon.0) 174 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:46 vm00 bash[69512]: audit 2026-03-09T18:44:45.120894+0000 mon.a (mon.0) 174 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:46 vm00 bash[69512]: cluster 2026-03-09T18:44:45.125122+0000 
mon.a (mon.0) 175 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:46 vm00 bash[69512]: cluster 2026-03-09T18:44:45.125122+0000 mon.a (mon.0) 175 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:46 vm00 bash[69512]: audit 2026-03-09T18:44:45.129010+0000 mon.a (mon.0) 176 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:44:46.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:46 vm00 bash[69512]: audit 2026-03-09T18:44:45.129010+0000 mon.a (mon.0) 176 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:44:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:46 vm08 bash[46122]: audit 2026-03-09T18:44:45.120894+0000 mon.a (mon.0) 174 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:44:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:46 vm08 bash[46122]: audit 2026-03-09T18:44:45.120894+0000 mon.a (mon.0) 174 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T18:44:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:46 vm08 bash[46122]: cluster 2026-03-09T18:44:45.125122+0000 mon.a (mon.0) 175 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T18:44:46.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:46 vm08 bash[46122]: cluster 2026-03-09T18:44:45.125122+0000 mon.a (mon.0) 175 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T18:44:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:46 vm08 bash[46122]: audit 2026-03-09T18:44:45.129010+0000 mon.a (mon.0) 176 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:44:46.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:46 vm08 bash[46122]: audit 2026-03-09T18:44:45.129010+0000 mon.a (mon.0) 176 : audit [INF] from='osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:44:47.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.062164+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v35: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.062164+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v35: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.141954+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: Degraded data redundancy: 78/627 objects degraded (12.440%), 24 pgs degraded (PG_DEGRADED) 2026-03-09T18:44:47.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.141954+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: Degraded data redundancy: 78/627 objects degraded (12.440%), 24 pgs degraded (PG_DEGRADED) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.142390+0000 mon.a (mon.0) 178 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.142390+0000 mon.a (mon.0) 178 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.142402+0000 mon.a (mon.0) 179 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.142402+0000 mon.a (mon.0) 179 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.156228+0000 mon.a (mon.0) 180 : cluster [INF] osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627] boot 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.156228+0000 mon.a (mon.0) 180 : cluster [INF] osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627] boot 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.156254+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: cluster 2026-03-09T18:44:46.156254+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e105: 8 total, 8 up, 
8 in 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: audit 2026-03-09T18:44:46.156933+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:47 vm00 bash[65531]: audit 2026-03-09T18:44:46.156933+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.062164+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v35: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.062164+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v35: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.141954+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: Degraded data redundancy: 78/627 objects degraded (12.440%), 24 pgs degraded (PG_DEGRADED) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.141954+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: Degraded data redundancy: 78/627 objects degraded (12.440%), 24 pgs degraded (PG_DEGRADED) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.142390+0000 mon.a (mon.0) 178 : cluster 
[INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.142390+0000 mon.a (mon.0) 178 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.142402+0000 mon.a (mon.0) 179 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.142402+0000 mon.a (mon.0) 179 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.156228+0000 mon.a (mon.0) 180 : cluster [INF] osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627] boot 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.156228+0000 mon.a (mon.0) 180 : cluster [INF] osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627] boot 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.156254+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: cluster 2026-03-09T18:44:46.156254+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: audit 2026-03-09T18:44:46.156933+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:44:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:47 vm00 bash[69512]: audit 2026-03-09T18:44:46.156933+0000 
mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.062164+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v35: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.062164+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v35: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.141954+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: Degraded data redundancy: 78/627 objects degraded (12.440%), 24 pgs degraded (PG_DEGRADED) 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.141954+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: Degraded data redundancy: 78/627 objects degraded (12.440%), 24 pgs degraded (PG_DEGRADED) 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.142390+0000 mon.a (mon.0) 178 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.142390+0000 mon.a (mon.0) 178 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.142402+0000 mon.a 
(mon.0) 179 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.142402+0000 mon.a (mon.0) 179 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.156228+0000 mon.a (mon.0) 180 : cluster [INF] osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627] boot 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.156228+0000 mon.a (mon.0) 180 : cluster [INF] osd.3 [v2:192.168.123.100:6826/1208985627,v1:192.168.123.100:6827/1208985627] boot 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.156254+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: cluster 2026-03-09T18:44:46.156254+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: audit 2026-03-09T18:44:46.156933+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:44:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:47 vm08 bash[46122]: audit 2026-03-09T18:44:46.156933+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: cluster 2026-03-09T18:44:46.064948+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 24569.930798 IOPS is not within the threshold limit range of 50.000000 IOPS and 
500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: cluster 2026-03-09T18:44:46.064948+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 24569.930798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: cluster 2026-03-09T18:44:47.165884+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: cluster 2026-03-09T18:44:47.165884+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:47.836323+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:47.836323+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:47.844654+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:47.844654+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:48.103072+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:48.103072+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:48.105722+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:48 vm08 bash[46122]: audit 2026-03-09T18:44:48.105722+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: cluster 2026-03-09T18:44:46.064948+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 24569.930798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: cluster 2026-03-09T18:44:46.064948+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 24569.930798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: cluster 2026-03-09T18:44:47.165884+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: cluster 2026-03-09T18:44:47.165884+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:47.836323+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:47.836323+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:47.844654+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:47.844654+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:48.103072+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:48.103072+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: audit 2026-03-09T18:44:48.105722+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:48 vm00 bash[65531]: 
audit 2026-03-09T18:44:48.105722+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: cluster 2026-03-09T18:44:46.064948+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 24569.930798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: cluster 2026-03-09T18:44:46.064948+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 24569.930798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: cluster 2026-03-09T18:44:47.165884+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: cluster 2026-03-09T18:44:47.165884+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:47.836323+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:47.836323+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:47.844654+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:47.844654+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:48.103072+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:48.103072+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: audit 2026-03-09T18:44:48.105722+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:48 vm00 bash[69512]: 
audit 2026-03-09T18:44:48.105722+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:44:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:49 vm08 bash[46122]: cluster 2026-03-09T18:44:48.062518+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:49 vm08 bash[46122]: cluster 2026-03-09T18:44:48.062518+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:49 vm08 bash[46122]: audit 2026-03-09T18:44:48.472131+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:49 vm08 bash[46122]: audit 2026-03-09T18:44:48.472131+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:49 vm08 bash[46122]: audit 2026-03-09T18:44:48.485569+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:49 vm08 bash[46122]: audit 2026-03-09T18:44:48.485569+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:49 vm00 bash[65531]: cluster 2026-03-09T18:44:48.062518+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 
GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:49 vm00 bash[65531]: cluster 2026-03-09T18:44:48.062518+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:49 vm00 bash[65531]: audit 2026-03-09T18:44:48.472131+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:49 vm00 bash[65531]: audit 2026-03-09T18:44:48.472131+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:49 vm00 bash[65531]: audit 2026-03-09T18:44:48.485569+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:49 vm00 bash[65531]: audit 2026-03-09T18:44:48.485569+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:49 vm00 bash[69512]: cluster 2026-03-09T18:44:48.062518+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:49 vm00 bash[69512]: cluster 2026-03-09T18:44:48.062518+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:49 vm00 
bash[69512]: audit 2026-03-09T18:44:48.472131+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:49 vm00 bash[69512]: audit 2026-03-09T18:44:48.472131+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:49 vm00 bash[69512]: audit 2026-03-09T18:44:48.485569+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.525 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:49 vm00 bash[69512]: audit 2026-03-09T18:44:48.485569+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:49.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:44:49] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-09T18:44:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:51 vm08 bash[46122]: cluster 2026-03-09T18:44:50.063055+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 17 active+undersized, 10 active+undersized+degraded, 134 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 34/627 objects degraded (5.423%) 2026-03-09T18:44:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:51 vm08 bash[46122]: cluster 2026-03-09T18:44:50.063055+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 17 active+undersized, 10 active+undersized+degraded, 134 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 34/627 objects degraded (5.423%) 2026-03-09T18:44:51.480 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:51 vm00 bash[65531]: cluster 2026-03-09T18:44:50.063055+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 17 active+undersized, 10 active+undersized+degraded, 134 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 34/627 objects degraded (5.423%) 2026-03-09T18:44:51.480 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:51 vm00 bash[65531]: cluster 2026-03-09T18:44:50.063055+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 17 active+undersized, 10 active+undersized+degraded, 134 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 34/627 objects degraded (5.423%) 2026-03-09T18:44:51.480 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:51 vm00 bash[69512]: cluster 2026-03-09T18:44:50.063055+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 17 active+undersized, 10 active+undersized+degraded, 134 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 34/627 objects degraded (5.423%) 2026-03-09T18:44:51.480 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:51 vm00 bash[69512]: cluster 2026-03-09T18:44:50.063055+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 17 active+undersized, 10 active+undersized+degraded, 134 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 34/627 objects degraded (5.423%) 2026-03-09T18:44:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:52 vm08 bash[46122]: cluster 2026-03-09T18:44:52.167752+0000 mon.a (mon.0) 188 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/627 objects degraded (5.423%), 10 pgs degraded) 2026-03-09T18:44:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:52 vm08 bash[46122]: cluster 2026-03-09T18:44:52.167752+0000 mon.a (mon.0) 188 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/627 objects degraded (5.423%), 10 pgs degraded) 2026-03-09T18:44:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:52 vm08 bash[46122]: cluster 2026-03-09T18:44:52.167768+0000 mon.a (mon.0) 189 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:52 vm08 bash[46122]: cluster 2026-03-09T18:44:52.167768+0000 mon.a (mon.0) 189 : cluster [INF] Cluster is now healthy 
2026-03-09T18:44:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:52 vm00 bash[65531]: cluster 2026-03-09T18:44:52.167752+0000 mon.a (mon.0) 188 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/627 objects degraded (5.423%), 10 pgs degraded) 2026-03-09T18:44:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:52 vm00 bash[65531]: cluster 2026-03-09T18:44:52.167752+0000 mon.a (mon.0) 188 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/627 objects degraded (5.423%), 10 pgs degraded) 2026-03-09T18:44:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:52 vm00 bash[65531]: cluster 2026-03-09T18:44:52.167768+0000 mon.a (mon.0) 189 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:52 vm00 bash[65531]: cluster 2026-03-09T18:44:52.167768+0000 mon.a (mon.0) 189 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:52 vm00 bash[69512]: cluster 2026-03-09T18:44:52.167752+0000 mon.a (mon.0) 188 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/627 objects degraded (5.423%), 10 pgs degraded) 2026-03-09T18:44:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:52 vm00 bash[69512]: cluster 2026-03-09T18:44:52.167752+0000 mon.a (mon.0) 188 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/627 objects degraded (5.423%), 10 pgs degraded) 2026-03-09T18:44:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:52 vm00 bash[69512]: cluster 2026-03-09T18:44:52.167768+0000 mon.a (mon.0) 189 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:52 vm00 bash[69512]: cluster 2026-03-09T18:44:52.167768+0000 mon.a (mon.0) 189 : cluster [INF] Cluster is now healthy 2026-03-09T18:44:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:44:53 vm08 bash[46122]: audit 2026-03-09T18:44:51.484096+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:53 vm08 bash[46122]: audit 2026-03-09T18:44:51.484096+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:53 vm08 bash[46122]: cluster 2026-03-09T18:44:52.063626+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:53 vm08 bash[46122]: cluster 2026-03-09T18:44:52.063626+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:53 vm00 bash[65531]: audit 2026-03-09T18:44:51.484096+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:53 vm00 bash[65531]: audit 2026-03-09T18:44:51.484096+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:53 vm00 bash[65531]: cluster 2026-03-09T18:44:52.063626+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:53 vm00 bash[65531]: 
cluster 2026-03-09T18:44:52.063626+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:53 vm00 bash[69512]: audit 2026-03-09T18:44:51.484096+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:53 vm00 bash[69512]: audit 2026-03-09T18:44:51.484096+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:53 vm00 bash[69512]: cluster 2026-03-09T18:44:52.063626+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:53.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:53 vm00 bash[69512]: cluster 2026-03-09T18:44:52.063626+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: cluster 2026-03-09T18:44:54.063928+0000 mgr.y (mgr.44107) 124 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: cluster 2026-03-09T18:44:54.063928+0000 mgr.y (mgr.44107) 124 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.244352+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 
' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.244352+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.253769+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.253769+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.256140+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.256140+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.257329+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.257329+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.263400+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.263400+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.315192+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.315192+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.317445+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.317445+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.319099+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.319099+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.320381+0000 mon.c (mon.1) 140 : 
audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.320381+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.321889+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.321889+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.322074+0000 mgr.y (mgr.44107) 125 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.322074+0000 mgr.y (mgr.44107) 125 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: cephadm 2026-03-09T18:44:54.322954+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: cephadm 2026-03-09T18:44:54.322954+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.815829+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.815829+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.820692+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.820692+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.821682+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:55 vm00 bash[65531]: audit 2026-03-09T18:44:54.821682+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: cluster 2026-03-09T18:44:54.063928+0000 mgr.y (mgr.44107) 124 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: cluster 2026-03-09T18:44:54.063928+0000 mgr.y (mgr.44107) 124 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.244352+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.244352+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.253769+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.253769+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.256140+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.256140+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.879 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.257329+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.257329+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.263400+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.263400+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.315192+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.315192+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.317445+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.317445+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.319099+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.319099+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.320381+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.320381+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.321889+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.321889+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.322074+0000 mgr.y (mgr.44107) 125 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.322074+0000 mgr.y (mgr.44107) 125 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: cephadm 2026-03-09T18:44:54.322954+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: cephadm 2026-03-09T18:44:54.322954+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.815829+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.815829+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.820692+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.820692+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.821682+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:55 vm00 bash[69512]: audit 2026-03-09T18:44:54.821682+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: cluster 2026-03-09T18:44:54.063928+0000 mgr.y (mgr.44107) 124 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: cluster 2026-03-09T18:44:54.063928+0000 mgr.y (mgr.44107) 124 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.244352+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.244352+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.253769+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.253769+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.256140+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.256140+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.257329+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.257329+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.263400+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.263400+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.315192+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.315192+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.317445+0000 mon.c (mon.1) 138 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.317445+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.319099+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.319099+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.320381+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.320381+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:44:55.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.321889+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.321889+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", 
"ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.322074+0000 mgr.y (mgr.44107) 125 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.322074+0000 mgr.y (mgr.44107) 125 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: cephadm 2026-03-09T18:44:54.322954+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: cephadm 2026-03-09T18:44:54.322954+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.815829+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.815829+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.820692+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.820692+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T18:44:55.975 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.821682+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:55.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:55 vm08 bash[46122]: audit 2026-03-09T18:44:54.821682+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:44:56.783 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:56.783 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:56.783 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:44:56.783 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:56 vm00 bash[65531]: cephadm 2026-03-09T18:44:54.809780+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T18:44:56.783 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:56 vm00 bash[65531]: cephadm 2026-03-09T18:44:54.823467+0000 mgr.y (mgr.44107) 128 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T18:44:56.783 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:56 vm00 bash[65531]: cluster 2026-03-09T18:44:56.064491+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T18:44:56.784 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:44:56.784 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:56 vm00 bash[69512]: cephadm 2026-03-09T18:44:54.809780+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T18:44:56.784 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:56 vm00 bash[69512]: cephadm 2026-03-09T18:44:54.823467+0000 mgr.y (mgr.44107) 128 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T18:44:56.784 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:56 vm00 bash[69512]: cluster 2026-03-09T18:44:56.064491+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T18:44:56.784 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:44:56.784 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:56.784 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:56.784 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:56.784 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:44:56.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:56 vm08 bash[46122]: cephadm 2026-03-09T18:44:54.809780+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T18:44:56.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:56 vm08 bash[46122]: cephadm 2026-03-09T18:44:54.823467+0000 mgr.y (mgr.44107) 128 : cephadm [INF] Deploying daemon osd.2 on vm00 2026-03-09T18:44:56.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:56 vm08 bash[46122]: cluster 2026-03-09T18:44:56.064491+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T18:44:57.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:56 vm00 systemd[1]: Stopping Ceph osd.2 for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:44:57.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:56 vm00 bash[31464]: debug 2026-03-09T18:44:56.820+0000 7f09404fa700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:44:57.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:56 vm00 bash[31464]: debug 2026-03-09T18:44:56.820+0000 7f09404fa700 -1 osd.2 106 *** Got signal Terminated *** 2026-03-09T18:44:57.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:56 vm00 bash[31464]: debug 2026-03-09T18:44:56.820+0000 7f09404fa700 -1 osd.2 106 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:44:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:57 vm00 bash[65531]: cluster 2026-03-09T18:44:56.824546+0000 mon.a (mon.0) 194 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T18:44:57.879 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:57 vm00 bash[81076]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-2 2026-03-09T18:44:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:57 vm00 bash[69512]: cluster 2026-03-09T18:44:56.824546+0000 mon.a (mon.0) 194 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T18:44:57.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:57 vm08 bash[46122]: cluster 2026-03-09T18:44:56.824546+0000 mon.a (mon.0) 194 : cluster [INF] osd.2 marked itself down and dead
2026-03-09T18:44:58.236 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.236 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.236 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.236 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:57 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.2.service: Deactivated successfully. 2026-03-09T18:44:58.236 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:57 vm00 systemd[1]: Stopped Ceph osd.2 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:44:58.236 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.236 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.236 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.237 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:44:58.237 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.237 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:44:58.487 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:58 vm00 systemd[1]: Started Ceph osd.2 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:44:58.487 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:58 vm00 bash[81291]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:58.758 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:58 vm00 bash[81291]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:59.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: cluster 2026-03-09T18:44:57.573957+0000 mon.a (mon.0) 195 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: cluster 2026-03-09T18:44:57.585320+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: cluster 2026-03-09T18:44:58.064890+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: audit 2026-03-09T18:44:58.108846+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: audit 2026-03-09T18:44:58.278269+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: audit 2026-03-09T18:44:58.287539+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: audit 2026-03-09T18:44:58.303014+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:44:58 vm00 bash[65531]: audit 2026-03-09T18:44:58.304915+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:59.129
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: cluster 2026-03-09T18:44:57.573957+0000 mon.a (mon.0) 195 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: cluster 2026-03-09T18:44:57.585320+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: cluster 2026-03-09T18:44:58.064890+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: audit 2026-03-09T18:44:58.108846+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: audit 2026-03-09T18:44:58.278269+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: audit 2026-03-09T18:44:58.287539+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: audit 2026-03-09T18:44:58.303014+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:44:58 vm00 bash[69512]: audit 2026-03-09T18:44:58.304915+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:59.224
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: cluster 2026-03-09T18:44:57.573957+0000 mon.a (mon.0) 195 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: cluster 2026-03-09T18:44:57.585320+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: cluster 2026-03-09T18:44:58.064890+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: audit 2026-03-09T18:44:58.108846+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: audit 2026-03-09T18:44:58.278269+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: audit 2026-03-09T18:44:58.287539+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: audit 2026-03-09T18:44:58.303014+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:44:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:44:58 vm08 bash[46122]: audit 2026-03-09T18:44:58.304915+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:44:59.768
INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T18:44:59.769 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:59.769 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:44:59.769 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T18:44:59.769 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c4db3968-0f23-4e0e-be85-a851860f8acf/osd-block-b6754d4f-0b5b-4d48-8415-b590ff7d2cdb --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T18:44:59.769 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:44:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:44:59] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-09T18:45:00.077 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/ln -snf /dev/ceph-c4db3968-0f23-4e0e-be85-a851860f8acf/osd-block-b6754d4f-0b5b-4d48-8415-b590ff7d2cdb /var/lib/ceph/osd/ceph-2/block 2026-03-09T18:45:00.077 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T18:45:00.077 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T18:45:00.077 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T18:45:00.078 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81291]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T18:45:00.078 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:44:59 vm00 bash[81642]: debug 2026-03-09T18:44:59.948+0000 7ff2e0332640 1 -- 192.168.123.100:0/3162241522 <== mon.0 v2:192.168.123.100:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x5600270cd680 con 0x5600270c6000 2026-03-09T18:45:00.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:00 vm00 bash[65531]: cluster 2026-03-09T18:44:59.154115+0000 mon.a (mon.0) 201 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T18:45:00.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:00 vm00 bash[69512]: cluster 2026-03-09T18:44:59.154115+0000 mon.a (mon.0) 201 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T18:45:00.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:00 vm08 bash[46122]: cluster 2026-03-09T18:44:59.154115+0000 mon.a (mon.0) 201 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T18:45:01.190 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:45:00 vm00 bash[81642]: debug 2026-03-09T18:45:00.900+0000 7ff2e2b9c740 -1 Falling back to public interface
2026-03-09T18:45:01.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:01 vm08 bash[46122]: cluster 2026-03-09T18:45:00.065348+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v46: 161 pgs: 40 peering, 3 stale+active+clean, 118 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:45:01.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:01 vm08 bash[46122]: cluster 2026-03-09T18:45:00.071288+0000 mon.a (mon.0) 202 : cluster [WRN] Health check failed: Reduced data availability: 7 pgs inactive, 8 pgs peering (PG_AVAILABILITY) 2026-03-09T18:45:01.490 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:01 vm00 bash[65531]: cluster 2026-03-09T18:45:00.065348+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v46: 161 pgs: 40 peering, 3 stale+active+clean, 118 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:45:01.490 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:01 vm00 bash[65531]: cluster 2026-03-09T18:45:00.071288+0000 mon.a (mon.0) 202 : cluster [WRN] Health check failed: Reduced data availability: 7 pgs inactive, 8 pgs peering (PG_AVAILABILITY) 2026-03-09T18:45:01.490 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:01 vm00 bash[69512]: cluster 2026-03-09T18:45:00.065348+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v46: 161 pgs: 40 peering, 3 stale+active+clean, 118 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:45:01.490 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:01 vm00 bash[69512]: cluster 2026-03-09T18:45:00.071288+0000 mon.a (mon.0) 202 : cluster [WRN] Health check failed: Reduced data availability: 7 pgs inactive, 8 pgs peering (PG_AVAILABILITY) 2026-03-09T18:45:02.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:45:01 vm00 bash[81642]: debug 2026-03-09T18:45:01.844+0000 7ff2e2b9c740 -1 osd.2 0 read_superblock omap replica is missing. 2026-03-09T18:45:02.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:45:01 vm00 bash[81642]: debug 2026-03-09T18:45:01.872+0000 7ff2e2b9c740 -1 osd.2 106 log_to_monitors true
2026-03-09T18:45:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:02 vm08 bash[46122]: audit 2026-03-09T18:45:01.880966+0000 mon.a (mon.0) 203 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T18:45:02.726 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:02 vm00 bash[65531]: audit 2026-03-09T18:45:01.880966+0000 mon.a (mon.0) 203 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T18:45:02.726 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:02 vm00 bash[69512]: audit 2026-03-09T18:45:01.880966+0000 mon.a (mon.0) 203 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T18:45:03.128 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:45:02 vm00 bash[81642]: debug 2026-03-09T18:45:02.728+0000 7ff2da947640 -1 osd.2 106 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:01.494180+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:03.724
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: cluster 2026-03-09T18:45:02.065845+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v47: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: cluster 2026-03-09T18:45:02.065845+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v47: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: cluster 2026-03-09T18:45:02.308435+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: cluster 2026-03-09T18:45:02.308435+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:02.704327+0000 mon.a (mon.0) 205 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:02.704327+0000 mon.a (mon.0) 205 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': 
finished 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: cluster 2026-03-09T18:45:02.711293+0000 mon.a (mon.0) 206 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T18:45:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: cluster 2026-03-09T18:45:02.711293+0000 mon.a (mon.0) 206 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T18:45:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:02.711630+0000 mon.a (mon.0) 207 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:02.711630+0000 mon.a (mon.0) 207 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:03.103547+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:03.103547+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 2026-03-09T18:45:03.107722+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:03 vm08 bash[46122]: audit 
2026-03-09T18:45:03.107722+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:01.494180+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:01.494180+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: cluster 2026-03-09T18:45:02.065845+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v47: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: cluster 2026-03-09T18:45:02.065845+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v47: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: cluster 2026-03-09T18:45:02.308435+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: cluster 2026-03-09T18:45:02.308435+0000 mon.a (mon.0) 204 : cluster [WRN] 
Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:02.704327+0000 mon.a (mon.0) 205 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:02.704327+0000 mon.a (mon.0) 205 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: cluster 2026-03-09T18:45:02.711293+0000 mon.a (mon.0) 206 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: cluster 2026-03-09T18:45:02.711293+0000 mon.a (mon.0) 206 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:02.711630+0000 mon.a (mon.0) 207 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:02.711630+0000 mon.a (mon.0) 207 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 
2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:03.103547+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:03.103547+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:03.107722+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:03 vm00 bash[65531]: audit 2026-03-09T18:45:03.107722+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:01.494180+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:01.494180+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: cluster 2026-03-09T18:45:02.065845+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v47: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:03.879 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: cluster 2026-03-09T18:45:02.065845+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v47: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 126 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: cluster 2026-03-09T18:45:02.308435+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:45:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: cluster 2026-03-09T18:45:02.308435+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:02.704327+0000 mon.a (mon.0) 205 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:02.704327+0000 mon.a (mon.0) 205 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: cluster 2026-03-09T18:45:02.711293+0000 mon.a (mon.0) 206 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: cluster 2026-03-09T18:45:02.711293+0000 mon.a (mon.0) 206 : cluster [DBG] 
osdmap e109: 8 total, 7 up, 8 in 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:02.711630+0000 mon.a (mon.0) 207 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:02.711630+0000 mon.a (mon.0) 207 : audit [INF] from='osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:03.103547+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:03.103547+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:03.107722+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:03.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:03 vm00 bash[69512]: audit 2026-03-09T18:45:03.107722+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:03.704674+0000 mon.a (mon.0) 209 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds 
down) 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:03.704674+0000 mon.a (mon.0) 209 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:03.738255+0000 mon.a (mon.0) 210 : cluster [INF] osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381] boot 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:03.738255+0000 mon.a (mon.0) 210 : cluster [INF] osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381] boot 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:03.738460+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:03.738460+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: audit 2026-03-09T18:45:03.741698+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: audit 2026-03-09T18:45:03.741698+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:04.066279+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v50: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 144 MiB used, 
160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:04.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:04 vm08 bash[46122]: cluster 2026-03-09T18:45:04.066279+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v50: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:03.704674+0000 mon.a (mon.0) 209 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:03.704674+0000 mon.a (mon.0) 209 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:03.738255+0000 mon.a (mon.0) 210 : cluster [INF] osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381] boot 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:03.738255+0000 mon.a (mon.0) 210 : cluster [INF] osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381] boot 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:03.738460+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:03.738460+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: audit 2026-03-09T18:45:03.741698+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: audit 2026-03-09T18:45:03.741698+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:04.066279+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v50: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:04 vm00 bash[69512]: cluster 2026-03-09T18:45:04.066279+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v50: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:03.704674+0000 mon.a (mon.0) 209 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:03.704674+0000 mon.a (mon.0) 209 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:05.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:03.738255+0000 mon.a (mon.0) 210 : cluster [INF] osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381] boot 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:03.738255+0000 mon.a (mon.0) 210 : cluster 
[INF] osd.2 [v2:192.168.123.100:6818/95519381,v1:192.168.123.100:6819/95519381] boot 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:03.738460+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:03.738460+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: audit 2026-03-09T18:45:03.741698+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: audit 2026-03-09T18:45:03.741698+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:04.066279+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v50: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:05.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:04 vm00 bash[65531]: cluster 2026-03-09T18:45:04.066279+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v50: 161 pgs: 4 active+undersized, 40 peering, 2 active+undersized+degraded, 115 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: cluster 2026-03-09T18:45:04.814021+0000 mon.a (mon.0) 212 : cluster [DBG] osdmap e111: 
8 total, 8 up, 8 in 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: cluster 2026-03-09T18:45:04.814021+0000 mon.a (mon.0) 212 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:04.854853+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:04.854853+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:04.864385+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:04.864385+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:05.494379+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:05.494379+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:05.500125+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:05 vm00 bash[69512]: audit 2026-03-09T18:45:05.500125+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: cluster 2026-03-09T18:45:04.814021+0000 mon.a (mon.0) 212 : cluster [DBG] 
osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: cluster 2026-03-09T18:45:04.814021+0000 mon.a (mon.0) 212 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:04.854853+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:04.854853+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:04.864385+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:04.864385+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:05.494379+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:05.494379+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:05.500125+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:05 vm00 bash[65531]: audit 2026-03-09T18:45:05.500125+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: cluster 2026-03-09T18:45:04.814021+0000 mon.a (mon.0) 212 : 
cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: cluster 2026-03-09T18:45:04.814021+0000 mon.a (mon.0) 212 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:04.854853+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:04.854853+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:04.864385+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:04.864385+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:05.494379+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:05.494379+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:05.500125+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:06.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:05 vm08 bash[46122]: audit 2026-03-09T18:45:05.500125+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:07.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:06 vm00 bash[65531]: cluster 2026-03-09T18:45:06.066842+0000 mgr.y 
(mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 2 active+undersized, 40 peering, 1 active+undersized+degraded, 118 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:07.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:06 vm00 bash[65531]: cluster 2026-03-09T18:45:06.066842+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 2 active+undersized, 40 peering, 1 active+undersized+degraded, 118 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:06 vm00 bash[69512]: cluster 2026-03-09T18:45:06.066842+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 2 active+undersized, 40 peering, 1 active+undersized+degraded, 118 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:07.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:06 vm00 bash[69512]: cluster 2026-03-09T18:45:06.066842+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 2 active+undersized, 40 peering, 1 active+undersized+degraded, 118 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:06 vm08 bash[46122]: cluster 2026-03-09T18:45:06.066842+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 2 active+undersized, 40 peering, 1 active+undersized+degraded, 118 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:07.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:06 vm08 bash[46122]: cluster 2026-03-09T18:45:06.066842+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 2 active+undersized, 40 peering, 
1 active+undersized+degraded, 118 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:09.110 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:45:09.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:08 vm00 bash[65531]: cluster 2026-03-09T18:45:08.118212+0000 mon.a (mon.0) 217 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs inactive, 8 pgs peering) 2026-03-09T18:45:09.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:08 vm00 bash[65531]: cluster 2026-03-09T18:45:08.118212+0000 mon.a (mon.0) 217 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs inactive, 8 pgs peering) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:08 vm00 bash[65531]: cluster 2026-03-09T18:45:08.118244+0000 mon.a (mon.0) 218 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:08 vm00 bash[65531]: cluster 2026-03-09T18:45:08.118244+0000 mon.a (mon.0) 218 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:08 vm00 bash[65531]: cluster 2026-03-09T18:45:08.118253+0000 mon.a (mon.0) 219 : cluster [INF] Cluster is now healthy 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:08 vm00 bash[65531]: cluster 2026-03-09T18:45:08.118253+0000 mon.a (mon.0) 219 : cluster [INF] Cluster is now healthy 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:08 vm00 bash[69512]: cluster 2026-03-09T18:45:08.118212+0000 mon.a (mon.0) 217 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs inactive, 8 pgs 
peering) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:08 vm00 bash[69512]: cluster 2026-03-09T18:45:08.118212+0000 mon.a (mon.0) 217 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs inactive, 8 pgs peering) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:08 vm00 bash[69512]: cluster 2026-03-09T18:45:08.118244+0000 mon.a (mon.0) 218 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:08 vm00 bash[69512]: cluster 2026-03-09T18:45:08.118244+0000 mon.a (mon.0) 218 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded) 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:08 vm00 bash[69512]: cluster 2026-03-09T18:45:08.118253+0000 mon.a (mon.0) 219 : cluster [INF] Cluster is now healthy 2026-03-09T18:45:09.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:08 vm00 bash[69512]: cluster 2026-03-09T18:45:08.118253+0000 mon.a (mon.0) 219 : cluster [INF] Cluster is now healthy 2026-03-09T18:45:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:08 vm08 bash[46122]: cluster 2026-03-09T18:45:08.118212+0000 mon.a (mon.0) 217 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs inactive, 8 pgs peering) 2026-03-09T18:45:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:08 vm08 bash[46122]: cluster 2026-03-09T18:45:08.118212+0000 mon.a (mon.0) 217 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs inactive, 8 pgs peering) 2026-03-09T18:45:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:08 vm08 bash[46122]: cluster 2026-03-09T18:45:08.118244+0000 mon.a (mon.0) 218 : cluster [INF] Health check cleared: PG_DEGRADED (was: 
Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded) 2026-03-09T18:45:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:08 vm08 bash[46122]: cluster 2026-03-09T18:45:08.118244+0000 mon.a (mon.0) 218 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded) 2026-03-09T18:45:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:08 vm08 bash[46122]: cluster 2026-03-09T18:45:08.118253+0000 mon.a (mon.0) 219 : cluster [INF] Cluster is now healthy 2026-03-09T18:45:09.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:08 vm08 bash[46122]: cluster 2026-03-09T18:45:08.118253+0000 mon.a (mon.0) 219 : cluster [INF] Cluster is now healthy 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (15m) 4s ago 22m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (2m) 75s ago 22m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (3m) 4s ago 21m 43.8M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (3m) 75s ago 25m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (12m) 4s ago 25m 525M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (85s) 4s ago 25m 45.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (2m) 75s ago 25m 37.9M 2048M 
19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (99s) 4s ago 25m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (15m) 4s ago 22m 7891k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (15m) 75s ago 22m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (24m) 4s ago 24m 53.7M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (24m) 4s ago 24m 56.7M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (9s) 4s ago 24m 21.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (26s) 4s ago 24m 66.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (23m) 75s ago 23m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (23m) 75s ago 23m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (23m) 75s ago 23m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (22m) 75s ago 22m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (3m) 75s ago 22m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running 
(21m) 4s ago 21m 89.1M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:45:09.528 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (21m) 75s ago 21m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6, 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8, 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 
19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:45:09.773 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:45:09.826 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:45:09] "GET /metrics HTTP/1.1" 200 37641 "" "Prometheus/2.51.0" 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) osd. Upgrade limited to 2 daemons (0 remaining).", 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "2/8 daemons upgraded", 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:45:10.003 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: cluster 2026-03-09T18:45:08.067220+0000 mgr.y (mgr.44107) 136 : cluster [DBG] pgmap v53: 161 pgs: 30 peering, 131 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: cluster 2026-03-09T18:45:08.067220+0000 mgr.y (mgr.44107) 136 : cluster [DBG] pgmap v53: 161 pgs: 30 peering, 131 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: audit 2026-03-09T18:45:09.101996+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: audit 2026-03-09T18:45:09.101996+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: audit 2026-03-09T18:45:09.318048+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.54246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: audit 2026-03-09T18:45:09.318048+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.54246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: audit 2026-03-09T18:45:09.776286+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.100:0/1489701075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:09 vm00 bash[65531]: audit 2026-03-09T18:45:09.776286+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 
192.168.123.100:0/1489701075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: cluster 2026-03-09T18:45:08.067220+0000 mgr.y (mgr.44107) 136 : cluster [DBG] pgmap v53: 161 pgs: 30 peering, 131 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: cluster 2026-03-09T18:45:08.067220+0000 mgr.y (mgr.44107) 136 : cluster [DBG] pgmap v53: 161 pgs: 30 peering, 131 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: audit 2026-03-09T18:45:09.101996+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: audit 2026-03-09T18:45:09.101996+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: audit 2026-03-09T18:45:09.318048+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.54246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: audit 2026-03-09T18:45:09.318048+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.54246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: audit 2026-03-09T18:45:09.776286+0000 mon.a 
(mon.0) 220 : audit [DBG] from='client.? 192.168.123.100:0/1489701075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:10.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:09 vm00 bash[69512]: audit 2026-03-09T18:45:09.776286+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.100:0/1489701075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: cluster 2026-03-09T18:45:08.067220+0000 mgr.y (mgr.44107) 136 : cluster [DBG] pgmap v53: 161 pgs: 30 peering, 131 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: cluster 2026-03-09T18:45:08.067220+0000 mgr.y (mgr.44107) 136 : cluster [DBG] pgmap v53: 161 pgs: 30 peering, 131 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: audit 2026-03-09T18:45:09.101996+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: audit 2026-03-09T18:45:09.101996+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: audit 2026-03-09T18:45:09.318048+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.54246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: audit 
2026-03-09T18:45:09.318048+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.54246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: audit 2026-03-09T18:45:09.776286+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.100:0/1489701075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:10.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:09 vm08 bash[46122]: audit 2026-03-09T18:45:09.776286+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.100:0/1489701075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:10 vm00 bash[65531]: audit 2026-03-09T18:45:09.525938+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.44242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:10 vm00 bash[65531]: audit 2026-03-09T18:45:09.525938+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.44242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:10 vm00 bash[65531]: audit 2026-03-09T18:45:10.006568+0000 mgr.y (mgr.44107) 140 : audit [DBG] from='client.54258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:10 vm00 bash[65531]: audit 2026-03-09T18:45:10.006568+0000 mgr.y (mgr.44107) 140 : audit [DBG] from='client.54258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:10 vm00 bash[65531]: cluster 
2026-03-09T18:45:10.067759+0000 mgr.y (mgr.44107) 141 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 695 B/s rd, 0 op/s 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:10 vm00 bash[65531]: cluster 2026-03-09T18:45:10.067759+0000 mgr.y (mgr.44107) 141 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 695 B/s rd, 0 op/s 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:10 vm00 bash[69512]: audit 2026-03-09T18:45:09.525938+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.44242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:10 vm00 bash[69512]: audit 2026-03-09T18:45:09.525938+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.44242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:10 vm00 bash[69512]: audit 2026-03-09T18:45:10.006568+0000 mgr.y (mgr.44107) 140 : audit [DBG] from='client.54258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:10 vm00 bash[69512]: audit 2026-03-09T18:45:10.006568+0000 mgr.y (mgr.44107) 140 : audit [DBG] from='client.54258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:10 vm00 bash[69512]: cluster 2026-03-09T18:45:10.067759+0000 mgr.y (mgr.44107) 141 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 695 B/s rd, 0 op/s 2026-03-09T18:45:11.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:10 vm00 bash[69512]: cluster 
2026-03-09T18:45:10.067759+0000 mgr.y (mgr.44107) 141 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 695 B/s rd, 0 op/s 2026-03-09T18:45:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:10 vm08 bash[46122]: audit 2026-03-09T18:45:09.525938+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.44242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:10 vm08 bash[46122]: audit 2026-03-09T18:45:09.525938+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.44242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:10 vm08 bash[46122]: audit 2026-03-09T18:45:10.006568+0000 mgr.y (mgr.44107) 140 : audit [DBG] from='client.54258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:10 vm08 bash[46122]: audit 2026-03-09T18:45:10.006568+0000 mgr.y (mgr.44107) 140 : audit [DBG] from='client.54258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:10 vm08 bash[46122]: cluster 2026-03-09T18:45:10.067759+0000 mgr.y (mgr.44107) 141 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 695 B/s rd, 0 op/s 2026-03-09T18:45:11.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:10 vm08 bash[46122]: cluster 2026-03-09T18:45:10.067759+0000 mgr.y (mgr.44107) 141 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 695 B/s rd, 0 op/s 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 
2026-03-09T18:45:11.452402+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.452402+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.460977+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.460977+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.464992+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.464992+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.466165+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.466165+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.478046+0000 mon.a 
(mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.478046+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.501906+0000 mgr.y (mgr.44107) 142 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.501906+0000 mgr.y (mgr.44107) 142 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.523704+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.523704+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.525177+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.525177+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.526574+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.526574+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: cephadm 2026-03-09T18:45:11.527299+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: cephadm 2026-03-09T18:45:11.527299+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.532489+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.532489+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.535568+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.535568+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 
2026-03-09T18:45:11.536764+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.536764+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.537747+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.537747+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.538663+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.538663+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.539545+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.539545+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.540515+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: cephadm 2026-03-09T18:45:11.541180+0000 mgr.y (mgr.44107) 144 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:45:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.542058+0000 mon.c (mon.1) 158 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.542363+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.546557+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.548585+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.549304+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.553467+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.555626+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.555825+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.559470+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.561435+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.561639+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.562087+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.562253+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.562676+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.562860+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.563552+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.563737+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.564268+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.564463+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.564987+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.565191+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.565736+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.565929+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.567064+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:45:12.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.567230+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.567777+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.567977+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.571854+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.573556+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.573734+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.574155+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.574310+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.574728+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.574896+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.575468+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.575622+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.576032+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.576181+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.576584+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.576738+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: cephadm 2026-03-09T18:45:11.577085+0000 mgr.y (mgr.44107) 145 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.577335+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.577492+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.580995+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.581458+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.582720+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.583218+0000 mon.c (mon.1) 179 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.587873+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.627249+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.628515+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.629053+0000 mon.c (mon.1) 182 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: audit 2026-03-09T18:45:11.633811+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:12 vm08 bash[46122]: cluster 2026-03-09T18:45:12.068206+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:45:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.452402+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.460977+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.464992+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.466165+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.478046+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.501906+0000 mgr.y (mgr.44107) 142 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.523704+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.525177+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.526574+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: cephadm 2026-03-09T18:45:11.527299+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.532489+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.535568+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.536764+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.537747+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.538663+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.539545+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.540515+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: cephadm 2026-03-09T18:45:11.541180+0000 mgr.y (mgr.44107) 144 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.542058+0000 mon.c (mon.1) 158 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": 
"config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.542058+0000 mon.c (mon.1) 158 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.542363+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.542363+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.546557+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.546557+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.548585+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.548585+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.549304+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.549304+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.553467+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.553467+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.555626+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.555626+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 
2026-03-09T18:45:11.555825+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.555825+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.559470+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.559470+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.561435+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.561435+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.561639+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:45:12.880 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.561639+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562087+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562087+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562253+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562253+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562676+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562676+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": 
"config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562860+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.562860+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.563552+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.563552+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.563737+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.563737+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 
2026-03-09T18:45:11.564268+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.564268+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.564463+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.564463+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.564987+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.564987+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.565191+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.ceph-exporter"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.565191+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.565736+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.565736+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.565929+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.565929+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567064+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567064+0000 
mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567230+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567230+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567777+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567777+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567977+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.567977+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 
2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.571854+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.571854+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.573556+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.573556+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.573734+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.573734+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574155+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574155+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574310+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574310+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574728+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574728+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574896+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.574896+0000 mon.a (mon.0) 
243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.575468+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.575468+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.575622+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.575622+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576032+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576032+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576181+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576181+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576584+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576584+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576738+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.576738+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: cephadm 2026-03-09T18:45:11.577085+0000 mgr.y (mgr.44107) 145 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: cephadm 2026-03-09T18:45:11.577085+0000 mgr.y (mgr.44107) 145 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.577335+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.577335+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.577492+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.577492+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.580995+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.580995+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.581458+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.581458+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.582720+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.582720+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.583218+0000 mon.c (mon.1) 179 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:45:12.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.583218+0000 mon.c (mon.1) 179 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.587873+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 
2026-03-09T18:45:11.587873+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.627249+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.628515+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.629053+0000 mon.c (mon.1) 182 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: audit 2026-03-09T18:45:11.633811+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:12 vm00 bash[65531]: cluster 2026-03-09T18:45:12.068206+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.452402+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.460977+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.464992+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.466165+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.478046+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.501906+0000 mgr.y (mgr.44107) 142 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.523704+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:12.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.525177+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.526574+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: cephadm 2026-03-09T18:45:11.527299+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.532489+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.535568+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.536764+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.537747+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.538663+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.539545+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.540515+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: cephadm 2026-03-09T18:45:11.541180+0000 mgr.y (mgr.44107) 144 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.542058+0000 mon.c (mon.1) 158 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.542363+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.546557+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.548585+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.549304+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.553467+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.555626+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.555825+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.559470+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.561435+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.561639+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.562087+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.562253+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.562676+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.562860+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.563552+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.563737+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.564268+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.564463+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.564987+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:45:12.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.565191+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.565736+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.565929+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.567064+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.567230+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.567777+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.567977+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.571854+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.573556+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.573734+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.574155+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.574310+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.574728+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.574896+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.575468+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.575622+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.576032+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.576181+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.576584+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.576738+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: cephadm 2026-03-09T18:45:11.577085+0000 mgr.y (mgr.44107) 145 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.577335+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.577492+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.580995+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.581458+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.582720+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.583218+0000 mon.c (mon.1) 179 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.587873+0000 mon.a (mon.0) 249 : audit [INF]
from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.587873+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.627249+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.627249+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.628515+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.628515+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.629053+0000 mon.c (mon.1) 182 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.629053+0000 mon.c (mon.1) 182 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:45:12.884 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.633811+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: audit 2026-03-09T18:45:11.633811+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:12.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: cluster 2026-03-09T18:45:12.068206+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:12.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:12 vm00 bash[69512]: cluster 2026-03-09T18:45:12.068206+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:14.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:14 vm00 bash[65531]: audit 2026-03-09T18:45:13.118103+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:14.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:14 vm00 bash[65531]: audit 2026-03-09T18:45:13.118103+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:14.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:14 vm00 bash[69512]: audit 2026-03-09T18:45:13.118103+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:14.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:14 vm00 bash[69512]: audit 2026-03-09T18:45:13.118103+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:14.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:14 vm08 bash[46122]: audit 2026-03-09T18:45:13.118103+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:14.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:14 vm08 bash[46122]: audit 2026-03-09T18:45:13.118103+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:15.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:15 vm00 bash[65531]: cluster 2026-03-09T18:45:14.068541+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:15.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:15 vm00 bash[65531]: cluster 2026-03-09T18:45:14.068541+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:15.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:15 vm00 bash[69512]: cluster 2026-03-09T18:45:14.068541+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:15.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:15 vm00 bash[69512]: cluster 2026-03-09T18:45:14.068541+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:15 vm08 bash[46122]: cluster 2026-03-09T18:45:14.068541+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:15 vm08 bash[46122]: cluster 2026-03-09T18:45:14.068541+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:17 vm00 bash[65531]: cluster 
2026-03-09T18:45:16.069038+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:45:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:17 vm00 bash[65531]: cluster 2026-03-09T18:45:16.069038+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:45:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:17 vm00 bash[69512]: cluster 2026-03-09T18:45:16.069038+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:45:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:17 vm00 bash[69512]: cluster 2026-03-09T18:45:16.069038+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:45:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:17 vm08 bash[46122]: cluster 2026-03-09T18:45:16.069038+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:45:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:17 vm08 bash[46122]: cluster 2026-03-09T18:45:16.069038+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:45:19.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: cluster 2026-03-09T18:45:18.069366+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:19.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: 
cluster 2026-03-09T18:45:18.069366+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:19.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: audit 2026-03-09T18:45:18.103254+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: audit 2026-03-09T18:45:18.103254+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: audit 2026-03-09T18:45:18.107040+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: audit 2026-03-09T18:45:18.107040+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: audit 2026-03-09T18:45:18.125646+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:19 vm00 bash[65531]: audit 2026-03-09T18:45:18.125646+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: cluster 2026-03-09T18:45:18.069366+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: cluster 2026-03-09T18:45:18.069366+0000 
mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: audit 2026-03-09T18:45:18.103254+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: audit 2026-03-09T18:45:18.103254+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: audit 2026-03-09T18:45:18.107040+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: audit 2026-03-09T18:45:18.107040+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: audit 2026-03-09T18:45:18.125646+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:19 vm00 bash[69512]: audit 2026-03-09T18:45:18.125646+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: cluster 2026-03-09T18:45:18.069366+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: cluster 2026-03-09T18:45:18.069366+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap 
v58: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: audit 2026-03-09T18:45:18.103254+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: audit 2026-03-09T18:45:18.103254+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: audit 2026-03-09T18:45:18.107040+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: audit 2026-03-09T18:45:18.107040+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: audit 2026-03-09T18:45:18.125646+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:19 vm08 bash[46122]: audit 2026-03-09T18:45:18.125646+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:45:19] "GET /metrics HTTP/1.1" 200 37800 "" "Prometheus/2.51.0" 2026-03-09T18:45:21.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:21 vm00 bash[65531]: cluster 2026-03-09T18:45:20.069779+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:45:21.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:21 vm00 bash[65531]: cluster 2026-03-09T18:45:20.069779+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:21 vm00 bash[69512]: cluster 2026-03-09T18:45:20.069779+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:21.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:21 vm00 bash[69512]: cluster 2026-03-09T18:45:20.069779+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:21 vm08 bash[46122]: cluster 2026-03-09T18:45:20.069779+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:21 vm08 bash[46122]: cluster 2026-03-09T18:45:20.069779+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:23.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:23 vm00 bash[65531]: audit 2026-03-09T18:45:21.511256+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:23 vm00 bash[65531]: audit 2026-03-09T18:45:21.511256+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:23 vm00 bash[65531]: cluster 2026-03-09T18:45:22.070122+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:23 vm00 bash[65531]: cluster 2026-03-09T18:45:22.070122+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:23 vm00 bash[69512]: audit 2026-03-09T18:45:21.511256+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:23 vm00 bash[69512]: audit 2026-03-09T18:45:21.511256+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:23 vm00 bash[69512]: cluster 2026-03-09T18:45:22.070122+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:23.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:23 vm00 bash[69512]: cluster 2026-03-09T18:45:22.070122+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:23 vm08 bash[46122]: audit 2026-03-09T18:45:21.511256+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:23 vm08 bash[46122]: audit 2026-03-09T18:45:21.511256+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:23 vm08 bash[46122]: cluster 2026-03-09T18:45:22.070122+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:23 vm08 bash[46122]: cluster 2026-03-09T18:45:22.070122+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:25.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:25 vm00 bash[65531]: cluster 2026-03-09T18:45:24.070432+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:25.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:25 vm00 bash[65531]: cluster 2026-03-09T18:45:24.070432+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:25.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:25 vm00 bash[69512]: cluster 2026-03-09T18:45:24.070432+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:25.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:25 vm00 bash[69512]: cluster 2026-03-09T18:45:24.070432+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 
457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:25 vm08 bash[46122]: cluster 2026-03-09T18:45:24.070432+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:25 vm08 bash[46122]: cluster 2026-03-09T18:45:24.070432+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:27.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:27 vm00 bash[65531]: cluster 2026-03-09T18:45:26.070916+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:27.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:27 vm00 bash[65531]: cluster 2026-03-09T18:45:26.070916+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:27.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:27 vm00 bash[69512]: cluster 2026-03-09T18:45:26.070916+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:27.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:27 vm00 bash[69512]: cluster 2026-03-09T18:45:26.070916+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:27.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:27 vm08 bash[46122]: cluster 2026-03-09T18:45:26.070916+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 
active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:27.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:27 vm08 bash[46122]: cluster 2026-03-09T18:45:26.070916+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:29.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:29 vm08 bash[46122]: cluster 2026-03-09T18:45:28.071271+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:29.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:29 vm08 bash[46122]: cluster 2026-03-09T18:45:28.071271+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:29.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:29 vm00 bash[65531]: cluster 2026-03-09T18:45:28.071271+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:29.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:29 vm00 bash[65531]: cluster 2026-03-09T18:45:28.071271+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:29.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:29 vm00 bash[69512]: cluster 2026-03-09T18:45:28.071271+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:29.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:29 vm00 bash[69512]: cluster 2026-03-09T18:45:28.071271+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 
active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:45:29] "GET /metrics HTTP/1.1" 200 37800 "" "Prometheus/2.51.0" 2026-03-09T18:45:31.514 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:31 vm00 bash[65531]: cluster 2026-03-09T18:45:30.071651+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:31.514 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:31 vm00 bash[65531]: cluster 2026-03-09T18:45:30.071651+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:31.514 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:31 vm00 bash[69512]: cluster 2026-03-09T18:45:30.071651+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:31.514 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:31 vm00 bash[69512]: cluster 2026-03-09T18:45:30.071651+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:31.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:31 vm08 bash[46122]: cluster 2026-03-09T18:45:30.071651+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:31.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:31 vm08 bash[46122]: cluster 2026-03-09T18:45:30.071651+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:45:33.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:33 vm00 bash[65531]: audit 2026-03-09T18:45:31.517617+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:33 vm00 bash[65531]: audit 2026-03-09T18:45:31.517617+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:33 vm00 bash[65531]: cluster 2026-03-09T18:45:32.071958+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:33 vm00 bash[65531]: cluster 2026-03-09T18:45:32.071958+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:33 vm00 bash[65531]: audit 2026-03-09T18:45:33.099264+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:33 vm00 bash[65531]: audit 2026-03-09T18:45:33.099264+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:33 vm00 bash[69512]: audit 2026-03-09T18:45:31.517617+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:33 vm00 bash[69512]: audit 2026-03-09T18:45:31.517617+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:33 vm00 bash[69512]: cluster 2026-03-09T18:45:32.071958+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:33 vm00 bash[69512]: cluster 2026-03-09T18:45:32.071958+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:33 vm00 bash[69512]: audit 2026-03-09T18:45:33.099264+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:33 vm00 bash[69512]: audit 2026-03-09T18:45:33.099264+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:33 vm08 bash[46122]: audit 2026-03-09T18:45:31.517617+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:33 vm08 bash[46122]: audit 2026-03-09T18:45:31.517617+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.25132 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:33 vm08 bash[46122]: cluster 2026-03-09T18:45:32.071958+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:33 vm08 bash[46122]: cluster 2026-03-09T18:45:32.071958+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:33 vm08 bash[46122]: audit 2026-03-09T18:45:33.099264+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:33 vm08 bash[46122]: audit 2026-03-09T18:45:33.099264+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:35 vm00 bash[65531]: cluster 2026-03-09T18:45:34.072381+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:35 vm00 bash[65531]: cluster 2026-03-09T18:45:34.072381+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:35 vm00 bash[69512]: cluster 2026-03-09T18:45:34.072381+0000 mgr.y (mgr.44107) 159 : cluster 
[DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:35 vm00 bash[69512]: cluster 2026-03-09T18:45:34.072381+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:35 vm08 bash[46122]: cluster 2026-03-09T18:45:34.072381+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:35 vm08 bash[46122]: cluster 2026-03-09T18:45:34.072381+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:37.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:37 vm00 bash[65531]: cluster 2026-03-09T18:45:36.072859+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:37.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:37 vm00 bash[65531]: cluster 2026-03-09T18:45:36.072859+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:37 vm00 bash[69512]: cluster 2026-03-09T18:45:36.072859+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:37 vm00 bash[69512]: cluster 2026-03-09T18:45:36.072859+0000 mgr.y (mgr.44107) 160 : 
cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:37.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:37 vm08 bash[46122]: cluster 2026-03-09T18:45:36.072859+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:37.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:37 vm08 bash[46122]: cluster 2026-03-09T18:45:36.072859+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:39.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:39 vm00 bash[65531]: cluster 2026-03-09T18:45:38.073194+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:39.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:39 vm00 bash[65531]: cluster 2026-03-09T18:45:38.073194+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:39 vm00 bash[69512]: cluster 2026-03-09T18:45:38.073194+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:39 vm00 bash[69512]: cluster 2026-03-09T18:45:38.073194+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:39.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:45:39] "GET 
/metrics HTTP/1.1" 200 37796 "" "Prometheus/2.51.0" 2026-03-09T18:45:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:39 vm08 bash[46122]: cluster 2026-03-09T18:45:38.073194+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:39 vm08 bash[46122]: cluster 2026-03-09T18:45:38.073194+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:40.311 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (16m) 35s ago 22m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (3m) 106s ago 22m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (3m) 35s ago 22m 43.8M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (3m) 106s ago 25m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (13m) 35s ago 26m 525M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running 
(116s) 35s ago 26m 45.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (2m) 106s ago 25m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (2m) 35s ago 25m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (15m) 35s ago 23m 7891k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (15m) 106s ago 23m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (25m) 35s ago 25m 53.7M 4096M 17.2.0 e1d6a67b021e ab692a994bc3 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (25m) 35s ago 25m 56.7M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (40s) 35s ago 24m 21.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (58s) 35s ago 24m 66.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (24m) 106s ago 24m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (24m) 106s ago 24m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (23m) 106s ago 23m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:45:40.758 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (23m) 106s ago 23m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:45:40.759 
INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (3m) 106s ago 23m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:45:40.759 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (22m) 35s ago 22m 89.1M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:45:40.759 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (22m) 106s ago 22m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:45:40.811 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | length == 2'"'"'' 2026-03-09T18:45:41.335 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:45:41.382 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 7'"'"'' 2026-03-09T18:45:41.612 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:41 vm00 bash[65531]: cluster 2026-03-09T18:45:40.073636+0000 mgr.y (mgr.44107) 162 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:41.612 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:41 vm00 bash[65531]: cluster 2026-03-09T18:45:40.073636+0000 mgr.y (mgr.44107) 162 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:41.613 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:41 vm00 bash[65531]: audit 
2026-03-09T18:45:40.245066+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:41.613 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:41 vm00 bash[65531]: audit 2026-03-09T18:45:40.245066+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:41.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:41 vm00 bash[69512]: cluster 2026-03-09T18:45:40.073636+0000 mgr.y (mgr.44107) 162 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:41.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:41 vm00 bash[69512]: cluster 2026-03-09T18:45:40.073636+0000 mgr.y (mgr.44107) 162 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:41.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:41 vm00 bash[69512]: audit 2026-03-09T18:45:40.245066+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:41.613 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:41 vm00 bash[69512]: audit 2026-03-09T18:45:40.245066+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:41 vm08 bash[46122]: cluster 2026-03-09T18:45:40.073636+0000 mgr.y (mgr.44107) 162 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:41 
vm08 bash[46122]: cluster 2026-03-09T18:45:40.073636+0000 mgr.y (mgr.44107) 162 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:41 vm08 bash[46122]: audit 2026-03-09T18:45:40.245066+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:41 vm08 bash[46122]: audit 2026-03-09T18:45:40.245066+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:42 vm00 bash[65531]: audit 2026-03-09T18:45:40.757594+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.34253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:42 vm00 bash[65531]: audit 2026-03-09T18:45:40.757594+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.34253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:42 vm00 bash[65531]: audit 2026-03-09T18:45:41.324192+0000 mon.a (mon.0) 254 : audit [DBG] from='client.? 192.168.123.100:0/1193614302' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:42 vm00 bash[65531]: audit 2026-03-09T18:45:41.324192+0000 mon.a (mon.0) 254 : audit [DBG] from='client.? 
192.168.123.100:0/1193614302' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:42 vm00 bash[69512]: audit 2026-03-09T18:45:40.757594+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.34253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:42 vm00 bash[69512]: audit 2026-03-09T18:45:40.757594+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.34253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:42 vm00 bash[69512]: audit 2026-03-09T18:45:41.324192+0000 mon.a (mon.0) 254 : audit [DBG] from='client.? 192.168.123.100:0/1193614302' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:42.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:42 vm00 bash[69512]: audit 2026-03-09T18:45:41.324192+0000 mon.a (mon.0) 254 : audit [DBG] from='client.? 192.168.123.100:0/1193614302' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:42 vm08 bash[46122]: audit 2026-03-09T18:45:40.757594+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.34253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:42 vm08 bash[46122]: audit 2026-03-09T18:45:40.757594+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.34253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:42 vm08 bash[46122]: audit 2026-03-09T18:45:41.324192+0000 mon.a (mon.0) 254 : audit [DBG] from='client.? 
192.168.123.100:0/1193614302' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:42.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:42 vm08 bash[46122]: audit 2026-03-09T18:45:41.324192+0000 mon.a (mon.0) 254 : audit [DBG] from='client.? 192.168.123.100:0/1193614302' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:43.254 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:45:43.316 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:43 vm00 bash[65531]: audit 2026-03-09T18:45:41.525597+0000 mgr.y (mgr.44107) 165 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:43 vm00 bash[65531]: audit 2026-03-09T18:45:41.525597+0000 mgr.y (mgr.44107) 165 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:43 vm00 bash[65531]: audit 2026-03-09T18:45:41.835108+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:43 vm00 bash[65531]: audit 2026-03-09T18:45:41.835108+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": 
"quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:43 vm00 bash[65531]: cluster 2026-03-09T18:45:42.073998+0000 mgr.y (mgr.44107) 167 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:43 vm00 bash[65531]: cluster 2026-03-09T18:45:42.073998+0000 mgr.y (mgr.44107) 167 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:43 vm00 bash[69512]: audit 2026-03-09T18:45:41.525597+0000 mgr.y (mgr.44107) 165 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:43 vm00 bash[69512]: audit 2026-03-09T18:45:41.525597+0000 mgr.y (mgr.44107) 165 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:43 vm00 bash[69512]: audit 2026-03-09T18:45:41.835108+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:43 vm00 bash[69512]: audit 2026-03-09T18:45:41.835108+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", 
""]}]: dispatch 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:43 vm00 bash[69512]: cluster 2026-03-09T18:45:42.073998+0000 mgr.y (mgr.44107) 167 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:43.580 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:43 vm00 bash[69512]: cluster 2026-03-09T18:45:42.073998+0000 mgr.y (mgr.44107) 167 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:43 vm08 bash[46122]: audit 2026-03-09T18:45:41.525597+0000 mgr.y (mgr.44107) 165 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:43 vm08 bash[46122]: audit 2026-03-09T18:45:41.525597+0000 mgr.y (mgr.44107) 165 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:43 vm08 bash[46122]: audit 2026-03-09T18:45:41.835108+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:43 vm08 bash[46122]: audit 2026-03-09T18:45:41.835108+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:43 
vm08 bash[46122]: cluster 2026-03-09T18:45:42.073998+0000 mgr.y (mgr.44107) 167 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:43 vm08 bash[46122]: cluster 2026-03-09T18:45:42.073998+0000 mgr.y (mgr.44107) 167 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null, 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false, 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "which": "", 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null, 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "message": "", 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:45:43.796 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:45:43.848 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:45:44.337 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:45:44.393 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image 
quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd --limit 1' 2026-03-09T18:45:44.624 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:44 vm00 bash[65531]: audit 2026-03-09T18:45:44.340016+0000 mon.c (mon.1) 185 : audit [DBG] from='client.? 192.168.123.100:0/3700842150' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:45:44.625 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:44 vm00 bash[65531]: audit 2026-03-09T18:45:44.340016+0000 mon.c (mon.1) 185 : audit [DBG] from='client.? 192.168.123.100:0/3700842150' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:45:44.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:44 vm00 bash[69512]: audit 2026-03-09T18:45:44.340016+0000 mon.c (mon.1) 185 : audit [DBG] from='client.? 192.168.123.100:0/3700842150' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:45:44.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:44 vm00 bash[69512]: audit 2026-03-09T18:45:44.340016+0000 mon.c (mon.1) 185 : audit [DBG] from='client.? 192.168.123.100:0/3700842150' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:45:44.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:44 vm08 bash[46122]: audit 2026-03-09T18:45:44.340016+0000 mon.c (mon.1) 185 : audit [DBG] from='client.? 192.168.123.100:0/3700842150' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:45:44.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:44 vm08 bash[46122]: audit 2026-03-09T18:45:44.340016+0000 mon.c (mon.1) 185 : audit [DBG] from='client.? 
192.168.123.100:0/3700842150' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:45:45.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:45 vm00 bash[69512]: audit 2026-03-09T18:45:43.800011+0000 mgr.y (mgr.44107) 168 : audit [DBG] from='client.54282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:45 vm00 bash[69512]: audit 2026-03-09T18:45:43.800011+0000 mgr.y (mgr.44107) 168 : audit [DBG] from='client.54282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:45 vm00 bash[69512]: cluster 2026-03-09T18:45:44.074316+0000 mgr.y (mgr.44107) 169 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:45 vm00 bash[69512]: cluster 2026-03-09T18:45:44.074316+0000 mgr.y (mgr.44107) 169 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:45 vm00 bash[65531]: audit 2026-03-09T18:45:43.800011+0000 mgr.y (mgr.44107) 168 : audit [DBG] from='client.54282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:45 vm00 bash[65531]: audit 2026-03-09T18:45:43.800011+0000 mgr.y (mgr.44107) 168 : audit [DBG] from='client.54282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:45 vm00 bash[65531]: cluster 2026-03-09T18:45:44.074316+0000 mgr.y 
(mgr.44107) 169 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:45 vm00 bash[65531]: cluster 2026-03-09T18:45:44.074316+0000 mgr.y (mgr.44107) 169 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:45 vm08 bash[46122]: audit 2026-03-09T18:45:43.800011+0000 mgr.y (mgr.44107) 168 : audit [DBG] from='client.54282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:45 vm08 bash[46122]: audit 2026-03-09T18:45:43.800011+0000 mgr.y (mgr.44107) 168 : audit [DBG] from='client.54282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:45 vm08 bash[46122]: cluster 2026-03-09T18:45:44.074316+0000 mgr.y (mgr.44107) 169 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:45 vm08 bash[46122]: cluster 2026-03-09T18:45:44.074316+0000 mgr.y (mgr.44107) 169 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:46.246 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:45:46.436 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e 
sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:46 vm00 bash[69512]: audit 2026-03-09T18:45:44.856196+0000 mgr.y (mgr.44107) 170 : audit [DBG] from='client.44287 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "limit": 1, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:46 vm00 bash[69512]: audit 2026-03-09T18:45:46.244560+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:46 vm00 bash[69512]: audit 2026-03-09T18:45:46.247339+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:46 vm00 bash[69512]: audit 2026-03-09T18:45:46.251632+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:46 vm00 bash[69512]: audit 2026-03-09T18:45:46.253595+0000 mon.c (mon.1) 188 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:46 vm00 bash[69512]: audit 2026-03-09T18:45:46.304820+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:46 vm00 bash[65531]: audit 2026-03-09T18:45:44.856196+0000 mgr.y (mgr.44107) 170 : audit [DBG] from='client.44287 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "limit": 1, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:46 vm00 bash[65531]: audit 2026-03-09T18:45:46.244560+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:46 vm00 bash[65531]: audit 2026-03-09T18:45:46.247339+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:46 vm00 bash[65531]: audit 2026-03-09T18:45:46.251632+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:46 vm00 bash[65531]: audit 2026-03-09T18:45:46.253595+0000 mon.c (mon.1) 188 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:46.651 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:46 vm00 bash[65531]: audit 2026-03-09T18:45:46.304820+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:46 vm08 bash[46122]: audit 2026-03-09T18:45:44.856196+0000 mgr.y (mgr.44107) 170 : audit [DBG] from='client.44287 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "limit": 1, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:46 vm08 bash[46122]: audit 2026-03-09T18:45:46.244560+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:46 vm08 bash[46122]: audit 2026-03-09T18:45:46.247339+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:46 vm08 bash[46122]: audit 2026-03-09T18:45:46.251632+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:46 vm08 bash[46122]: audit 2026-03-09T18:45:46.253595+0000 mon.c (mon.1) 188 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:45:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:46 vm08 bash[46122]: audit 2026-03-09T18:45:46.304820+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:46.990 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (16m) 42s ago 23m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (3m) 113s ago 22m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (3m) 42s ago 22m 43.8M - 3.5 e1d6a67b021e ff3da66cebe9
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (3m) 113s ago 25m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (13m) 42s ago 26m 525M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (2m) 42s ago 26m 45.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (2m) 113s ago 25m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (2m) 42s ago 25m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (16m) 42s ago 23m 7891k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (16m) 113s ago 23m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (25m) 42s ago 25m 53.7M 4096M 17.2.0 e1d6a67b021e ab692a994bc3
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (25m) 42s ago 25m 56.7M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (47s) 42s ago 24m 21.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (64s) 42s ago 24m 66.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (24m) 113s ago 24m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (24m) 113s ago 24m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (23m) 113s ago 23m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (23m) 113s ago 23m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (3m) 113s ago 23m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (22m) 42s ago 22m 89.1M - 17.2.0 e1d6a67b021e 671fa80b7e00
2026-03-09T18:45:47.385 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (22m) 113s ago 22m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:45:47.630 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6,
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8,
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:45:47.631 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:45:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:47 vm08 bash[46122]: cluster 2026-03-09T18:45:46.074899+0000 mgr.y (mgr.44107) 171 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:45:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:47 vm08 bash[46122]: cephadm 2026-03-09T18:45:46.238792+0000 mgr.y (mgr.44107) 172 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:45:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:47 vm08 bash[46122]: cephadm 2026-03-09T18:45:46.373558+0000 mgr.y (mgr.44107) 173 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:45:47.729 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:47 vm00 bash[69512]: cluster 2026-03-09T18:45:46.074899+0000 mgr.y (mgr.44107) 171 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:45:47.729 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:47 vm00 bash[69512]: cephadm 2026-03-09T18:45:46.238792+0000 mgr.y (mgr.44107) 172 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:45:47.729 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:47 vm00 bash[69512]: cephadm 2026-03-09T18:45:46.373558+0000 mgr.y (mgr.44107) 173 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:45:47.729 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:47 vm00 bash[65531]: cluster 2026-03-09T18:45:46.074899+0000 mgr.y (mgr.44107) 171 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:45:47.729 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:47 vm00 bash[65531]: cephadm 2026-03-09T18:45:46.238792+0000 mgr.y (mgr.44107) 172 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:45:47.729 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:47 vm00 bash[65531]: cephadm 2026-03-09T18:45:46.373558+0000 mgr.y (mgr.44107) 173 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:45:47.905 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:45:47.905 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T18:45:47.905 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true,
2026-03-09T18:45:47.905 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) crash,osd. Upgrade limited to 1 daemons (1 remaining).",
2026-03-09T18:45:47.905 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [],
2026-03-09T18:45:47.906 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "2/8 daemons upgraded",
2026-03-09T18:45:47.906 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading osd daemons",
2026-03-09T18:45:47.906 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:45:47.906 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:46.982898+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.183900+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54294 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.384713+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54297 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.634352+0000 mon.a (mon.0) 257 : audit [DBG] from='client.? 192.168.123.100:0/1915848130' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.870380+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:47.873178+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:47.873205+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.874295+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.875323+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:47.875791+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.879204+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.880764+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:47.881294+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.884027+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.885586+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:47.886070+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.889239+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:48.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.890772+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.891010+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:47.891592+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:47.905900+0000 mgr.y (mgr.44107) 184 : audit [DBG] from='client.54309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cluster 2026-03-09T18:45:48.075237+0000 mgr.y (mgr.44107) 185 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:48.099218+0000 mon.c (mon.1) 194 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:48.321909+0000 mgr.y (mgr.44107) 186 : cephadm [INF] Upgrade: Updating osd.0
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:48.326646+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:48.328615+0000 mon.c (mon.1) 195 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: audit 2026-03-09T18:45:48.329163+0000 mon.c (mon.1) 196 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:45:48.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:48 vm00 bash[69512]: cephadm 2026-03-09T18:45:48.330832+0000 mgr.y (mgr.44107) 187 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-09T18:45:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:46.982898+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.183900+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54294 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.384713+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54297 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.634352+0000 mon.a (mon.0) 257 : audit [DBG] from='client.? 192.168.123.100:0/1915848130' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.634352+0000 mon.a (mon.0) 257 : audit [DBG] from='client.?
192.168.123.100:0/1915848130' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.870380+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.870380+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.873178+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.873178+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.873205+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.873205+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.874295+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.874295+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.875323+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.875323+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.875791+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.875791+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.879204+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.879204+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.880764+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.880764+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.881294+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.881294+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.884027+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.884027+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.885586+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.885586+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.886070+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 
2026-03-09T18:45:47.886070+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.889239+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.889239+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.890772+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.890772+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.891010+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.891010+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.891592+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:47.891592+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.905900+0000 mgr.y (mgr.44107) 184 : audit [DBG] from='client.54309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:47.905900+0000 mgr.y (mgr.44107) 184 : audit [DBG] from='client.54309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cluster 2026-03-09T18:45:48.075237+0000 mgr.y (mgr.44107) 185 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cluster 2026-03-09T18:45:48.075237+0000 mgr.y (mgr.44107) 185 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:48.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.099218+0000 mon.c (mon.1) 194 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:48.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.099218+0000 mon.c (mon.1) 194 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:48.321909+0000 mgr.y (mgr.44107) 186 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:48.321909+0000 mgr.y (mgr.44107) 186 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.326646+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.326646+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.328615+0000 mon.c (mon.1) 195 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.328615+0000 mon.c (mon.1) 195 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.329163+0000 mon.c (mon.1) 196 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:45:48 vm08 bash[46122]: audit 2026-03-09T18:45:48.329163+0000 mon.c (mon.1) 196 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:48.330832+0000 mgr.y (mgr.44107) 187 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:45:48.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:48 vm08 bash[46122]: cephadm 2026-03-09T18:45:48.330832+0000 mgr.y (mgr.44107) 187 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:46.982898+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:46.982898+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.183900+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54294 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.183900+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54294 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.384713+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54297 -' 
entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.384713+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54297 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.634352+0000 mon.a (mon.0) 257 : audit [DBG] from='client.? 192.168.123.100:0/1915848130' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.634352+0000 mon.a (mon.0) 257 : audit [DBG] from='client.? 192.168.123.100:0/1915848130' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.870380+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.870380+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.873178+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.873178+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.873205+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.873205+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.874295+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:48.788 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.874295+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.875323+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.875323+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.875791+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:45:48.789 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.875791+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.879204+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.879204+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.880764+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.880764+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.881294+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.881294+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.884027+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.884027+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.885586+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.885586+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.886070+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.886070+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.889239+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.889239+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.890772+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.890772+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.789 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.891010+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.891010+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.891592+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:47.891592+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.905900+0000 mgr.y (mgr.44107) 184 : audit [DBG] from='client.54309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:47.905900+0000 mgr.y (mgr.44107) 184 : audit [DBG] from='client.54309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cluster 2026-03-09T18:45:48.075237+0000 mgr.y (mgr.44107) 185 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cluster 2026-03-09T18:45:48.075237+0000 mgr.y (mgr.44107) 185 : cluster [DBG] pgmap v73: 161 pgs: 
161 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.099218+0000 mon.c (mon.1) 194 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.099218+0000 mon.c (mon.1) 194 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:48.321909+0000 mgr.y (mgr.44107) 186 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:48.321909+0000 mgr.y (mgr.44107) 186 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.326646+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.326646+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.328615+0000 mon.c (mon.1) 195 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.328615+0000 mon.c (mon.1) 195 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.329163+0000 mon.c (mon.1) 196 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: audit 2026-03-09T18:45:48.329163+0000 mon.c (mon.1) 196 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:48.330832+0000 mgr.y (mgr.44107) 187 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:45:48.789 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:48 vm00 bash[65531]: cephadm 2026-03-09T18:45:48.330832+0000 mgr.y (mgr.44107) 187 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-09T18:45:49.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: Stopping Ceph osd.0 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:49 vm00 bash[25170]: debug 2026-03-09T18:45:49.216+0000 7fcb2e736700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:49 vm00 bash[25170]: debug 2026-03-09T18:45:49.216+0000 7fcb2e736700 -1 osd.0 111 *** Got signal Terminated *** 2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:49 vm00 bash[25170]: debug 2026-03-09T18:45:49.216+0000 7fcb2e736700 -1 osd.0 111 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:45:49.379 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.379 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:45:49 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:49.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:45:49] "GET /metrics HTTP/1.1" 200 37799 "" "Prometheus/2.51.0" 2026-03-09T18:45:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:49 vm08 bash[46122]: cluster 2026-03-09T18:45:49.219191+0000 mon.a (mon.0) 263 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T18:45:50.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:49 vm08 bash[46122]: cluster 2026-03-09T18:45:49.219191+0000 mon.a (mon.0) 263 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T18:45:50.269 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:49 vm00 bash[69512]: cluster 2026-03-09T18:45:49.219191+0000 mon.a (mon.0) 263 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T18:45:50.269 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:49 vm00 bash[69512]: cluster 2026-03-09T18:45:49.219191+0000 mon.a (mon.0) 263 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T18:45:50.269 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:49 vm00 bash[65531]: cluster 2026-03-09T18:45:49.219191+0000 mon.a (mon.0) 263 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T18:45:50.269 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:49 vm00 bash[65531]: cluster 2026-03-09T18:45:49.219191+0000 mon.a (mon.0) 263 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T18:45:50.269 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 bash[87323]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-0 2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.0.service: Deactivated successfully. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: Stopped Ceph osd.0 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: Started Ceph osd.0 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:45:50.878 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 bash[87540]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:45:50.878 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:50 vm00 bash[87540]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: cluster 2026-03-09T18:45:49.891831+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: cluster 2026-03-09T18:45:49.891831+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: cluster 2026-03-09T18:45:49.910518+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: cluster 2026-03-09T18:45:49.910518+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: cluster 2026-03-09T18:45:50.075516+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v75: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 
2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: cluster 2026-03-09T18:45:50.075516+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v75: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.551835+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.551835+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.557848+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.557848+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.562884+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.562884+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.563720+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:51.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:50 vm08 bash[46122]: audit 2026-03-09T18:45:50.563720+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: cluster 2026-03-09T18:45:49.891831+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:45:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: cluster 2026-03-09T18:45:49.891831+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:45:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: cluster 2026-03-09T18:45:49.910518+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-09T18:45:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: cluster 2026-03-09T18:45:49.910518+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-09T18:45:51.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: cluster 2026-03-09T18:45:50.075516+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v75: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: cluster 2026-03-09T18:45:50.075516+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v75: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.551835+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.551835+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.557848+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.557848+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.562884+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.562884+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.563720+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:50 vm00 bash[65531]: audit 2026-03-09T18:45:50.563720+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: cluster 2026-03-09T18:45:49.891831+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: cluster 2026-03-09T18:45:49.891831+0000 mon.a (mon.0) 264 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: cluster 2026-03-09T18:45:49.910518+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 
2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: cluster 2026-03-09T18:45:49.910518+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: cluster 2026-03-09T18:45:50.075516+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v75: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: cluster 2026-03-09T18:45:50.075516+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v75: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.551835+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.551835+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.557848+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.557848+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.562884+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.562884+0000 mon.a (mon.0) 268 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.563720+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:51.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:50 vm00 bash[69512]: audit 2026-03-09T18:45:50.563720+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:45:51.878 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:51 vm00 bash[87540]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T18:45:51.879 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:51 vm00 bash[87540]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:45:51.879 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:51 vm00 bash[87540]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:45:51.879 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:51 vm00 bash[87540]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T18:45:51.879 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:51 vm00 bash[87540]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f4f720d4-09a5-447d-9ee4-bb39c4949e84/osd-block-b0cac7d6-07bf-4b00-9243-24f6ec5bc470 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T18:45:52.160 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:51 vm00 bash[69512]: cluster 2026-03-09T18:45:50.907570+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T18:45:52.160 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:51 vm00 bash[69512]: cluster 2026-03-09T18:45:50.907570+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 
2026-03-09T18:45:52.161 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:51 vm00 bash[65531]: cluster 2026-03-09T18:45:50.907570+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T18:45:52.161 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:51 vm00 bash[65531]: cluster 2026-03-09T18:45:50.907570+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T18:45:52.161 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:52 vm00 bash[87540]: Running command: /usr/bin/ln -snf /dev/ceph-f4f720d4-09a5-447d-9ee4-bb39c4949e84/osd-block-b0cac7d6-07bf-4b00-9243-24f6ec5bc470 /var/lib/ceph/osd/ceph-0/block 2026-03-09T18:45:52.161 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:52 vm00 bash[87540]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T18:45:52.161 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:52 vm00 bash[87540]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T18:45:52.161 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:52 vm00 bash[87540]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T18:45:52.161 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:52 vm00 bash[87540]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T18:45:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:51 vm08 bash[46122]: cluster 2026-03-09T18:45:50.907570+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T18:45:52.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:51 vm08 bash[46122]: cluster 2026-03-09T18:45:50.907570+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T18:45:52.628 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:52 vm00 bash[87898]: debug 2026-03-09T18:45:52.160+0000 7f80a17d1640 1 -- 192.168.123.100:0/360260039 <== mon.1 v2:192.168.123.100:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 
0x55c132721680 con 0x55c13192fc00 2026-03-09T18:45:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:52 vm08 bash[46122]: audit 2026-03-09T18:45:51.533591+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:52 vm08 bash[46122]: audit 2026-03-09T18:45:51.533591+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:52 vm08 bash[46122]: cluster 2026-03-09T18:45:52.075817+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v77: 161 pgs: 12 peering, 19 stale+active+clean, 130 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:53.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:52 vm08 bash[46122]: cluster 2026-03-09T18:45:52.075817+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v77: 161 pgs: 12 peering, 19 stale+active+clean, 130 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:53.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:52 vm00 bash[69512]: audit 2026-03-09T18:45:51.533591+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:53.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:52 vm00 bash[69512]: audit 2026-03-09T18:45:51.533591+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:53.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:52 vm00 bash[69512]: cluster 2026-03-09T18:45:52.075817+0000 mgr.y 
(mgr.44107) 190 : cluster [DBG] pgmap v77: 161 pgs: 12 peering, 19 stale+active+clean, 130 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:53.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:52 vm00 bash[69512]: cluster 2026-03-09T18:45:52.075817+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v77: 161 pgs: 12 peering, 19 stale+active+clean, 130 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:52 vm00 bash[65531]: audit 2026-03-09T18:45:51.533591+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:52 vm00 bash[65531]: audit 2026-03-09T18:45:51.533591+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:45:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:52 vm00 bash[65531]: cluster 2026-03-09T18:45:52.075817+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v77: 161 pgs: 12 peering, 19 stale+active+clean, 130 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:53.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:52 vm00 bash[65531]: cluster 2026-03-09T18:45:52.075817+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v77: 161 pgs: 12 peering, 19 stale+active+clean, 130 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:45:53.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:53 vm00 bash[87898]: debug 2026-03-09T18:45:53.092+0000 7f80a403b740 -1 Falling back to public interface 2026-03-09T18:45:54.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:53 vm08 
bash[46122]: cluster 2026-03-09T18:45:52.908967+0000 mon.a (mon.0) 270 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:45:54.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:53 vm08 bash[46122]: cluster 2026-03-09T18:45:52.908967+0000 mon.a (mon.0) 270 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:45:54.305 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:53 vm00 bash[69512]: cluster 2026-03-09T18:45:52.908967+0000 mon.a (mon.0) 270 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:45:54.305 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:53 vm00 bash[69512]: cluster 2026-03-09T18:45:52.908967+0000 mon.a (mon.0) 270 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:45:54.305 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:53 vm00 bash[65531]: cluster 2026-03-09T18:45:52.908967+0000 mon.a (mon.0) 270 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:45:54.305 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:53 vm00 bash[65531]: cluster 2026-03-09T18:45:52.908967+0000 mon.a (mon.0) 270 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:45:54.628 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:54 vm00 bash[87898]: debug 2026-03-09T18:45:54.308+0000 7f80a403b740 -1 osd.0 0 read_superblock omap replica is missing. 
2026-03-09T18:45:54.628 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:54 vm00 bash[87898]: debug 2026-03-09T18:45:54.340+0000 7f80a403b740 -1 osd.0 111 log_to_monitors true 2026-03-09T18:45:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:54 vm08 bash[46122]: cluster 2026-03-09T18:45:54.076100+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v78: 161 pgs: 5 active+undersized, 12 peering, 19 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:54 vm08 bash[46122]: cluster 2026-03-09T18:45:54.076100+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v78: 161 pgs: 5 active+undersized, 12 peering, 19 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:54 vm08 bash[46122]: audit 2026-03-09T18:45:54.343268+0000 mon.b (mon.2) 14 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:54 vm08 bash[46122]: audit 2026-03-09T18:45:54.343268+0000 mon.b (mon.2) 14 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:54 vm08 bash[46122]: audit 2026-03-09T18:45:54.346586+0000 mon.a (mon.0) 271 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:54 vm08 bash[46122]: audit 2026-03-09T18:45:54.346586+0000 mon.a (mon.0) 271 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:54 vm00 bash[69512]: cluster 2026-03-09T18:45:54.076100+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v78: 161 pgs: 5 active+undersized, 12 peering, 19 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:54 vm00 bash[69512]: cluster 2026-03-09T18:45:54.076100+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v78: 161 pgs: 5 active+undersized, 12 peering, 19 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:54 vm00 bash[69512]: audit 2026-03-09T18:45:54.343268+0000 mon.b (mon.2) 14 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:54 vm00 bash[69512]: audit 2026-03-09T18:45:54.343268+0000 mon.b (mon.2) 14 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:54 vm00 bash[69512]: audit 2026-03-09T18:45:54.346586+0000 mon.a (mon.0) 271 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:54 vm00 bash[69512]: audit 2026-03-09T18:45:54.346586+0000 mon.a (mon.0) 271 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:54 vm00 bash[65531]: cluster 2026-03-09T18:45:54.076100+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v78: 161 pgs: 5 active+undersized, 12 peering, 19 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:54 vm00 bash[65531]: cluster 2026-03-09T18:45:54.076100+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v78: 161 pgs: 5 active+undersized, 12 peering, 19 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 1/627 objects degraded (0.159%) 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:54 vm00 bash[65531]: audit 2026-03-09T18:45:54.343268+0000 mon.b (mon.2) 14 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:54 vm00 bash[65531]: audit 2026-03-09T18:45:54.343268+0000 mon.b (mon.2) 14 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:54 vm00 bash[65531]: audit 2026-03-09T18:45:54.346586+0000 mon.a 
(mon.0) 271 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:54 vm00 bash[65531]: audit 2026-03-09T18:45:54.346586+0000 mon.a (mon.0) 271 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:45:55.379 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:45:54 vm00 bash[87898]: debug 2026-03-09T18:45:54.960+0000 7f809bde6640 -1 osd.0 111 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: cluster 2026-03-09T18:45:54.919139+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: cluster 2026-03-09T18:45:54.919139+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: audit 2026-03-09T18:45:54.926633+0000 mon.a (mon.0) 273 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: audit 2026-03-09T18:45:54.926633+0000 mon.a (mon.0) 273 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: cluster 2026-03-09T18:45:54.928955+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e114: 8 
total, 7 up, 8 in 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: cluster 2026-03-09T18:45:54.928955+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: audit 2026-03-09T18:45:54.932601+0000 mon.b (mon.2) 15 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: audit 2026-03-09T18:45:54.932601+0000 mon.b (mon.2) 15 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: audit 2026-03-09T18:45:54.935870+0000 mon.a (mon.0) 275 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:55 vm08 bash[46122]: audit 2026-03-09T18:45:54.935870+0000 mon.a (mon.0) 275 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: cluster 2026-03-09T18:45:54.919139+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:45:56.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 
bash[65531]: cluster 2026-03-09T18:45:54.919139+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:45:56.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: audit 2026-03-09T18:45:54.926633+0000 mon.a (mon.0) 273 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: audit 2026-03-09T18:45:54.926633+0000 mon.a (mon.0) 273 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: cluster 2026-03-09T18:45:54.928955+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: cluster 2026-03-09T18:45:54.928955+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: audit 2026-03-09T18:45:54.932601+0000 mon.b (mon.2) 15 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: audit 2026-03-09T18:45:54.932601+0000 mon.b (mon.2) 15 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: audit 2026-03-09T18:45:54.935870+0000 mon.a (mon.0) 275 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:55 vm00 bash[65531]: audit 2026-03-09T18:45:54.935870+0000 mon.a (mon.0) 275 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: cluster 2026-03-09T18:45:54.919139+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: cluster 2026-03-09T18:45:54.919139+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: audit 2026-03-09T18:45:54.926633+0000 mon.a (mon.0) 273 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: audit 2026-03-09T18:45:54.926633+0000 mon.a (mon.0) 273 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: cluster 2026-03-09T18:45:54.928955+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T18:45:56.379 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: cluster 2026-03-09T18:45:54.928955+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: audit 2026-03-09T18:45:54.932601+0000 mon.b (mon.2) 15 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: audit 2026-03-09T18:45:54.932601+0000 mon.b (mon.2) 15 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: audit 2026-03-09T18:45:54.935870+0000 mon.a (mon.0) 275 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:56.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:55 vm00 bash[69512]: audit 2026-03-09T18:45:54.935870+0000 mon.a (mon.0) 275 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:55.927105+0000 mon.a (mon.0) 276 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:55.927105+0000 mon.a (mon.0) 276 : cluster [INF] Health check cleared: 
OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:55.945189+0000 mon.a (mon.0) 277 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548] boot 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:55.945189+0000 mon.a (mon.0) 277 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548] boot 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:55.945213+0000 mon.a (mon.0) 278 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:55.945213+0000 mon.a (mon.0) 278 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: audit 2026-03-09T18:45:55.947743+0000 mon.c (mon.1) 198 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: audit 2026-03-09T18:45:55.947743+0000 mon.c (mon.1) 198 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 2026-03-09T18:45:56.076416+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v81: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 70/627 objects degraded (11.164%) 2026-03-09T18:45:57.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:56 vm08 bash[46122]: cluster 
2026-03-09T18:45:56.076416+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v81: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 70/627 objects degraded (11.164%) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:55.927105+0000 mon.a (mon.0) 276 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:55.927105+0000 mon.a (mon.0) 276 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:55.945189+0000 mon.a (mon.0) 277 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548] boot 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:55.945189+0000 mon.a (mon.0) 277 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548] boot 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:55.945213+0000 mon.a (mon.0) 278 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:55.945213+0000 mon.a (mon.0) 278 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: audit 2026-03-09T18:45:55.947743+0000 mon.c (mon.1) 198 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 
bash[69512]: audit 2026-03-09T18:45:55.947743+0000 mon.c (mon.1) 198 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:56.076416+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v81: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 70/627 objects degraded (11.164%) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:56 vm00 bash[69512]: cluster 2026-03-09T18:45:56.076416+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v81: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 70/627 objects degraded (11.164%) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:55.927105+0000 mon.a (mon.0) 276 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:55.927105+0000 mon.a (mon.0) 276 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:55.945189+0000 mon.a (mon.0) 277 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548] boot 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:55.945189+0000 mon.a (mon.0) 277 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2285339548,v1:192.168.123.100:6803/2285339548] boot 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 
2026-03-09T18:45:55.945213+0000 mon.a (mon.0) 278 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:55.945213+0000 mon.a (mon.0) 278 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: audit 2026-03-09T18:45:55.947743+0000 mon.c (mon.1) 198 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: audit 2026-03-09T18:45:55.947743+0000 mon.c (mon.1) 198 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:56.076416+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v81: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 70/627 objects degraded (11.164%) 2026-03-09T18:45:57.380 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:56 vm00 bash[65531]: cluster 2026-03-09T18:45:56.076416+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v81: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 70/627 objects degraded (11.164%) 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: cluster 2026-03-09T18:45:56.932190+0000 mon.a (mon.0) 279 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: cluster 
2026-03-09T18:45:56.932190+0000 mon.a (mon.0) 279 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: cluster 2026-03-09T18:45:56.960205+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: cluster 2026-03-09T18:45:56.960205+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.037026+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.037026+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.044762+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.044762+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.595755+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.595755+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.603659+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:57 vm08 bash[46122]: audit 2026-03-09T18:45:57.603659+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: cluster 2026-03-09T18:45:56.932190+0000 mon.a (mon.0) 279 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T18:45:58.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: cluster 2026-03-09T18:45:56.932190+0000 mon.a (mon.0) 279 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: cluster 2026-03-09T18:45:56.960205+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: cluster 2026-03-09T18:45:56.960205+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.037026+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.037026+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.044762+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.044762+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.595755+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.595755+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.603659+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:57 vm00 bash[65531]: audit 2026-03-09T18:45:57.603659+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: cluster 2026-03-09T18:45:56.932190+0000 mon.a (mon.0) 279 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: cluster 2026-03-09T18:45:56.932190+0000 mon.a (mon.0) 279 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: cluster 2026-03-09T18:45:56.960205+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: cluster 2026-03-09T18:45:56.960205+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.037026+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 
2026-03-09T18:45:57.037026+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.044762+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.044762+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.595755+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.595755+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.603659+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:58.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:57 vm00 bash[69512]: audit 2026-03-09T18:45:57.603659+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:45:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:58 vm08 bash[46122]: cluster 2026-03-09T18:45:58.076750+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v83: 161 pgs: 32 active+undersized, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 69/627 objects degraded (11.005%) 2026-03-09T18:45:59.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:45:58 vm08 bash[46122]: cluster 2026-03-09T18:45:58.076750+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v83: 161 pgs: 32 active+undersized, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 69/627 objects degraded 
(11.005%) 2026-03-09T18:45:59.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:58 vm00 bash[69512]: cluster 2026-03-09T18:45:58.076750+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v83: 161 pgs: 32 active+undersized, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 69/627 objects degraded (11.005%) 2026-03-09T18:45:59.378 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:45:58 vm00 bash[69512]: cluster 2026-03-09T18:45:58.076750+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v83: 161 pgs: 32 active+undersized, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 69/627 objects degraded (11.005%) 2026-03-09T18:45:59.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:58 vm00 bash[65531]: cluster 2026-03-09T18:45:58.076750+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v83: 161 pgs: 32 active+undersized, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 69/627 objects degraded (11.005%) 2026-03-09T18:45:59.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:45:58 vm00 bash[65531]: cluster 2026-03-09T18:45:58.076750+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v83: 161 pgs: 32 active+undersized, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 69/627 objects degraded (11.005%) 2026-03-09T18:45:59.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:45:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:45:59] "GET /metrics HTTP/1.1" 200 37799 "" "Prometheus/2.51.0" 2026-03-09T18:46:00.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:00 vm08 bash[46122]: cluster 2026-03-09T18:46:00.166800+0000 mon.a (mon.0) 285 : cluster [WRN] Health check update: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:00.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:00 vm08 
bash[46122]: cluster 2026-03-09T18:46:00.166800+0000 mon.a (mon.0) 285 : cluster [WRN] Health check update: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:00.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:00 vm00 bash[69512]: cluster 2026-03-09T18:46:00.166800+0000 mon.a (mon.0) 285 : cluster [WRN] Health check update: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:00.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:00 vm00 bash[69512]: cluster 2026-03-09T18:46:00.166800+0000 mon.a (mon.0) 285 : cluster [WRN] Health check update: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:00 vm00 bash[65531]: cluster 2026-03-09T18:46:00.166800+0000 mon.a (mon.0) 285 : cluster [WRN] Health check update: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:00.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:00 vm00 bash[65531]: cluster 2026-03-09T18:46:00.166800+0000 mon.a (mon.0) 285 : cluster [WRN] Health check update: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:01.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:01 vm08 bash[46122]: cluster 2026-03-09T18:46:00.077170+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v84: 161 pgs: 5 active+undersized, 5 active+undersized+degraded, 151 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 15/627 objects degraded (2.392%) 2026-03-09T18:46:01.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:01 vm08 bash[46122]: cluster 2026-03-09T18:46:00.077170+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v84: 161 pgs: 5 active+undersized, 5 active+undersized+degraded, 151 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 15/627 
objects degraded (2.392%)
2026-03-09T18:46:01.534 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:01 vm00 bash[65531]: cluster 2026-03-09T18:46:00.077170+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v84: 161 pgs: 5 active+undersized, 5 active+undersized+degraded, 151 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 15/627 objects degraded (2.392%)
2026-03-09T18:46:01.534 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:01 vm00 bash[69512]: cluster 2026-03-09T18:46:00.077170+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v84: 161 pgs: 5 active+undersized, 5 active+undersized+degraded, 151 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 15/627 objects degraded (2.392%)
2026-03-09T18:46:02.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:02 vm00 bash[65531]: cluster 2026-03-09T18:46:02.221756+0000 mon.a (mon.0) 286 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded)
2026-03-09T18:46:02.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:02 vm00 bash[65531]: cluster 2026-03-09T18:46:02.221771+0000 mon.a (mon.0) 287 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:02.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:02 vm00 bash[69512]: cluster 2026-03-09T18:46:02.221756+0000 mon.a (mon.0) 286 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded)
2026-03-09T18:46:02.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:02 vm00 bash[69512]: cluster 2026-03-09T18:46:02.221771+0000 mon.a (mon.0) 287 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:02 vm08 bash[46122]: cluster 2026-03-09T18:46:02.221756+0000 mon.a (mon.0) 286 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/627 objects degraded (2.392%), 5 pgs degraded)
2026-03-09T18:46:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:02 vm08 bash[46122]: cluster 2026-03-09T18:46:02.221771+0000 mon.a (mon.0) 287 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:03.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:03 vm00 bash[65531]: audit 2026-03-09T18:46:01.537858+0000 mgr.y (mgr.44107) 195 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:46:03.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:03 vm00 bash[65531]: cluster 2026-03-09T18:46:02.077563+0000 mgr.y (mgr.44107) 196 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-09T18:46:03.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:03 vm00 bash[65531]: audit 2026-03-09T18:46:03.099581+0000 mon.c (mon.1) 199 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:46:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:03 vm00 bash[65531]: audit 2026-03-09T18:46:03.136973+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:03 vm00 bash[69512]: audit 2026-03-09T18:46:01.537858+0000 mgr.y (mgr.44107) 195 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:46:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:03 vm00 bash[69512]: cluster 2026-03-09T18:46:02.077563+0000 mgr.y (mgr.44107) 196 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-09T18:46:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:03 vm00 bash[69512]: audit 2026-03-09T18:46:03.099581+0000 mon.c (mon.1) 199 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:46:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:03 vm00 bash[69512]: audit 2026-03-09T18:46:03.136973+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:03 vm08 bash[46122]: audit 2026-03-09T18:46:01.537858+0000 mgr.y (mgr.44107) 195 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:46:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:03 vm08 bash[46122]: cluster 2026-03-09T18:46:02.077563+0000 mgr.y (mgr.44107) 196 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-09T18:46:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:03 vm08 bash[46122]: audit 2026-03-09T18:46:03.099581+0000 mon.c (mon.1) 199 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:46:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:03 vm08 bash[46122]: audit 2026-03-09T18:46:03.136973+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.294357+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.301679+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.304844+0000 mon.c (mon.1) 200 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.305784+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.310554+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.354119+0000 mon.c (mon.1) 202 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.355851+0000 mon.c (mon.1) 203 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.357632+0000 mon.c (mon.1) 204 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: cephadm 2026-03-09T18:46:03.358304+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.362450+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.367114+0000 mon.c (mon.1) 205 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.368223+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.369311+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.370340+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.371319+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.372291+0000 mon.c (mon.1) 210 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: cephadm 2026-03-09T18:46:03.372971+0000 mgr.y (mgr.44107) 198 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.374058+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.374288+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.376977+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:46:04.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.380414+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.380766+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.383195+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.385126+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.385423+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.387632+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.389206+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.389428+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.389830+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.390014+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.390436+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.390635+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.391062+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.391237+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.391760+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.391948+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.392355+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.392597+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.393046+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.393252+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.393674+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.393884+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.394315+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.394539+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.397169+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.398706+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.399058+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.399671+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.399971+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.400421+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.400624+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.401059+0000 mon.c (mon.1) 226 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.401257+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.402886+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.403137+0000 mon.a (mon.0)
313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.404373+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.404373+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.404573+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.404573+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: cephadm 2026-03-09T18:46:03.404985+0000 mgr.y (mgr.44107) 199 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: cephadm 2026-03-09T18:46:03.404985+0000 mgr.y (mgr.44107) 199 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.405216+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.405216+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.405451+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.405451+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.408367+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.408367+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.409514+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.409514+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.706210+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.706210+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.707382+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.707382+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.720456+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.720456+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 
2026-03-09T18:46:03.766701+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.294357+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.294357+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.301679+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.301679+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.304844+0000 mon.c (mon.1) 200 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.304844+0000 mon.c (mon.1) 200 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.305784+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.305784+0000 mon.c 
(mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.310554+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.310554+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.354119+0000 mon.c (mon.1) 202 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.354119+0000 mon.c (mon.1) 202 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.355851+0000 mon.c (mon.1) 203 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.355851+0000 mon.c (mon.1) 203 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.357632+0000 mon.c (mon.1) 204 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.357632+0000 mon.c (mon.1) 204 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: cephadm 2026-03-09T18:46:03.358304+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: cephadm 2026-03-09T18:46:03.358304+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.362450+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.362450+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.367114+0000 mon.c (mon.1) 205 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.367114+0000 mon.c (mon.1) 205 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.368223+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.368223+0000 mon.c (mon.1) 206 : 
audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.369311+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.369311+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.370340+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.370340+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.371319+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.371319+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.372291+0000 mon.c (mon.1) 210 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.372291+0000 mon.c (mon.1) 210 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: cephadm 2026-03-09T18:46:03.372971+0000 mgr.y (mgr.44107) 198 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: cephadm 2026-03-09T18:46:03.372971+0000 mgr.y (mgr.44107) 198 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.374058+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.374058+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.374288+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.374288+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 
2026-03-09T18:46:03.376977+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.376977+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.380414+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.380414+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.380766+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.380766+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.383195+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.383195+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.385126+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.385126+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.385423+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.385423+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.387632+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.387632+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": 
"client.crash"}]': finished 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.389206+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.389206+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.389428+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.389428+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.389830+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.389830+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.390014+0000 mon.a (mon.0) 300 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.390014+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.390436+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.390436+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.390635+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.390635+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391062+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391062+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391237+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391237+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391760+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391760+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391948+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.391948+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", 
"who": "mon"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.392355+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.392597+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.393046+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.393252+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.393674+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.393884+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.394315+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.394539+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:46:04.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.397169+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.398706+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.399058+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.399671+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.399971+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.400421+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.400624+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.401059+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.401257+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.402886+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.403137+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.404373+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.404573+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: cephadm 2026-03-09T18:46:03.404985+0000 mgr.y (mgr.44107) 199 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.405216+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.405451+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.408367+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.409514+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.706210+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.707382+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.720456+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.766701+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.768210+0000 mon.c (mon.1) 234 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.769325+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:04 vm00 bash[65531]: audit 2026-03-09T18:46:03.773779+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.766701+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.768210+0000 mon.c (mon.1) 234 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.769325+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:04.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:04 vm00 bash[69512]: audit 2026-03-09T18:46:03.773779+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.294357+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.301679+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.304844+0000 mon.c (mon.1) 200 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.305784+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.310554+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.354119+0000 mon.c (mon.1) 202 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.355851+0000 mon.c (mon.1) 203 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.357632+0000 mon.c (mon.1) 204 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: cephadm 2026-03-09T18:46:03.358304+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.362450+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.367114+0000 mon.c (mon.1) 205 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.368223+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.369311+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.370340+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.371319+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.372291+0000 mon.c (mon.1) 210 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: cephadm 2026-03-09T18:46:03.372971+0000 mgr.y (mgr.44107) 198 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.374058+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.374288+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.376977+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.380414+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.380766+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.383195+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.385126+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.385423+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.387632+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.389206+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:46:04.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.389428+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.389830+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.390014+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.390436+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.390635+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.391062+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.391237+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.391760+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.391948+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.392355+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config
rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.392355+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.392597+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.392597+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393046+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393046+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393252+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393252+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393674+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393674+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393884+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.393884+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.394315+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.394315+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.394539+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.394539+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.397169+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.397169+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.398706+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.398706+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.399058+0000 mon.a 
(mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.399058+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.399671+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.399671+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.399971+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.399971+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.400421+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 
bash[46122]: audit 2026-03-09T18:46:03.400421+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.400624+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.400624+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.401059+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.401059+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.401257+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.401257+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.402886+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.402886+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.403137+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.403137+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.404373+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.404373+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.404573+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config 
rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.404573+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: cephadm 2026-03-09T18:46:03.404985+0000 mgr.y (mgr.44107) 199 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: cephadm 2026-03-09T18:46:03.404985+0000 mgr.y (mgr.44107) 199 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.405216+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.405216+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.405451+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.405451+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 
2026-03-09T18:46:03.408367+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.408367+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.409514+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.409514+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.706210+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.706210+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.707382+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: 
audit 2026-03-09T18:46:03.707382+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.720456+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.720456+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.766701+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.766701+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.768210+0000 mon.c (mon.1) 234 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.768210+0000 mon.c (mon.1) 234 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.769325+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.769325+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.773779+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:04.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:04 vm08 bash[46122]: audit 2026-03-09T18:46:03.773779+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:05.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:05 vm00 bash[65531]: cluster 2026-03-09T18:46:04.077841+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 629 B/s rd, 0 op/s 2026-03-09T18:46:05.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:05 vm00 bash[65531]: cluster 2026-03-09T18:46:04.077841+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 629 B/s rd, 0 op/s 2026-03-09T18:46:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:05 vm00 bash[69512]: cluster 2026-03-09T18:46:04.077841+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 629 B/s rd, 0 op/s 2026-03-09T18:46:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:05 vm00 bash[69512]: cluster 2026-03-09T18:46:04.077841+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 629 B/s rd, 0 op/s 2026-03-09T18:46:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:05 vm08 bash[46122]: cluster 
2026-03-09T18:46:04.077841+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 629 B/s rd, 0 op/s 2026-03-09T18:46:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:05 vm08 bash[46122]: cluster 2026-03-09T18:46:04.077841+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 629 B/s rd, 0 op/s 2026-03-09T18:46:07.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:07 vm00 bash[65531]: cluster 2026-03-09T18:46:06.078350+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:46:07.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:07 vm00 bash[65531]: cluster 2026-03-09T18:46:06.078350+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:46:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:07 vm00 bash[69512]: cluster 2026-03-09T18:46:06.078350+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:46:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:07 vm00 bash[69512]: cluster 2026-03-09T18:46:06.078350+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:46:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:07 vm08 bash[46122]: cluster 2026-03-09T18:46:06.078350+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:46:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:07 vm08 bash[46122]: cluster 
2026-03-09T18:46:06.078350+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:46:09.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:09 vm08 bash[46122]: cluster 2026-03-09T18:46:08.078707+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 919 B/s rd, 0 op/s 2026-03-09T18:46:09.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:09 vm08 bash[46122]: cluster 2026-03-09T18:46:08.078707+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 919 B/s rd, 0 op/s 2026-03-09T18:46:09.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:09 vm08 bash[46122]: audit 2026-03-09T18:46:08.148035+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:09.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:09 vm08 bash[46122]: audit 2026-03-09T18:46:08.148035+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:09 vm00 bash[65531]: cluster 2026-03-09T18:46:08.078707+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 919 B/s rd, 0 op/s 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:09 vm00 bash[65531]: cluster 2026-03-09T18:46:08.078707+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 919 B/s rd, 0 op/s 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:09 vm00 bash[65531]: audit 2026-03-09T18:46:08.148035+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:09 
vm00 bash[65531]: audit 2026-03-09T18:46:08.148035+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:09 vm00 bash[69512]: cluster 2026-03-09T18:46:08.078707+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 919 B/s rd, 0 op/s 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:09 vm00 bash[69512]: cluster 2026-03-09T18:46:08.078707+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 919 B/s rd, 0 op/s 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:09 vm00 bash[69512]: audit 2026-03-09T18:46:08.148035+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:09.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:09 vm00 bash[69512]: audit 2026-03-09T18:46:08.148035+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:09.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:46:09] "GET /metrics HTTP/1.1" 200 37827 "" "Prometheus/2.51.0" 2026-03-09T18:46:11.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:11 vm08 bash[46122]: cluster 2026-03-09T18:46:10.079150+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:11.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:11 vm08 bash[46122]: cluster 2026-03-09T18:46:10.079150+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:11.543 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:11 vm00 bash[65531]: cluster 
2026-03-09T18:46:10.079150+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:11.543 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:11 vm00 bash[65531]: cluster 2026-03-09T18:46:10.079150+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:11.543 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:11 vm00 bash[69512]: cluster 2026-03-09T18:46:10.079150+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:11.543 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:11 vm00 bash[69512]: cluster 2026-03-09T18:46:10.079150+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:13.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:13 vm08 bash[46122]: audit 2026-03-09T18:46:11.547094+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:13.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:13 vm08 bash[46122]: audit 2026-03-09T18:46:11.547094+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:13.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:13 vm08 bash[46122]: cluster 2026-03-09T18:46:12.079428+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:13.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:13 vm08 
bash[46122]: cluster 2026-03-09T18:46:12.079428+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:13.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:13 vm00 bash[65531]: audit 2026-03-09T18:46:11.547094+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:13.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:13 vm00 bash[65531]: audit 2026-03-09T18:46:11.547094+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:13.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:13 vm00 bash[65531]: cluster 2026-03-09T18:46:12.079428+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:13.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:13 vm00 bash[65531]: cluster 2026-03-09T18:46:12.079428+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:13.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:13 vm00 bash[69512]: audit 2026-03-09T18:46:11.547094+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:13 vm00 bash[69512]: audit 2026-03-09T18:46:11.547094+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:13.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:13 vm00 bash[69512]: cluster 2026-03-09T18:46:12.079428+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:13 vm00 bash[69512]: cluster 2026-03-09T18:46:12.079428+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:15 vm08 bash[46122]: cluster 2026-03-09T18:46:14.079718+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:15 vm08 bash[46122]: cluster 2026-03-09T18:46:14.079718+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:15.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:15 vm00 bash[65531]: cluster 2026-03-09T18:46:14.079718+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:15.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:15 vm00 bash[65531]: cluster 2026-03-09T18:46:14.079718+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:15.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:15 vm00 bash[69512]: cluster 2026-03-09T18:46:14.079718+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:15.628 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:15 vm00 bash[69512]: cluster 2026-03-09T18:46:14.079718+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:17 vm08 bash[46122]: cluster 2026-03-09T18:46:16.080201+0000 mgr.y (mgr.44107) 207 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:17 vm08 bash[46122]: cluster 2026-03-09T18:46:16.080201+0000 mgr.y (mgr.44107) 207 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:17.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:17 vm00 bash[65531]: cluster 2026-03-09T18:46:16.080201+0000 mgr.y (mgr.44107) 207 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:17.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:17 vm00 bash[65531]: cluster 2026-03-09T18:46:16.080201+0000 mgr.y (mgr.44107) 207 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:17.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:17 vm00 bash[69512]: cluster 2026-03-09T18:46:16.080201+0000 mgr.y (mgr.44107) 207 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:17 vm00 bash[69512]: cluster 2026-03-09T18:46:16.080201+0000 mgr.y (mgr.44107) 207 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:46:18.196 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T18:46:18.440 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:18 vm00 bash[65531]: audit 2026-03-09T18:46:18.099665+0000 mon.c (mon.1) 236 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:18.440 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:18 vm00 bash[65531]: audit 2026-03-09T18:46:18.099665+0000 mon.c (mon.1) 236 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:18.440 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:18 vm00 bash[69512]: audit 2026-03-09T18:46:18.099665+0000 mon.c (mon.1) 236 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:18.440 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:18 vm00 bash[69512]: audit 2026-03-09T18:46:18.099665+0000 mon.c (mon.1) 236 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:18 vm08 bash[46122]: audit 2026-03-09T18:46:18.099665+0000 mon.c (mon.1) 236 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:18 vm08 bash[46122]: audit 2026-03-09T18:46:18.099665+0000 mon.c (mon.1) 236 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (16m) 21s ago 23m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (4m) 2m ago 23m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (4m) 21s ago 22m 44.0M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (4m) 2m ago 26m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (13m) 21s ago 27m 528M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (2m) 21s ago 27m 48.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (3m) 2m ago 26m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (2m) 21s ago 26m 44.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (16m) 21s ago 23m 7940k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (16m) 2m ago 23m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (26s) 21s ago 26m 12.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 
1334681baf1a 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (25m) 21s ago 25m 57.1M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (78s) 21s ago 25m 43.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (95s) 21s ago 25m 68.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (24m) 2m ago 24m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (24m) 2m ago 24m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (24m) 2m ago 24m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (24m) 2m ago 24m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (4m) 2m ago 23m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (23m) 21s ago 23m 89.2M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:46:18.654 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (23m) 2m ago 23m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:46:18.706 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | length == 2'"'"'' 2026-03-09T18:46:19.197 INFO:teuthology.orchestra.run.vm00.stdout:true 
2026-03-09T18:46:19.253 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 8'"'"'' 2026-03-09T18:46:19.473 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:19 vm00 bash[65531]: cluster 2026-03-09T18:46:18.080509+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:19.473 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:19 vm00 bash[65531]: cluster 2026-03-09T18:46:18.080509+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:19.473 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:19 vm00 bash[65531]: audit 2026-03-09T18:46:18.121769+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:19.473 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:19 vm00 bash[65531]: audit 2026-03-09T18:46:18.121769+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:19 vm00 bash[69512]: cluster 2026-03-09T18:46:18.080509+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:19 vm00 bash[69512]: cluster 2026-03-09T18:46:18.080509+0000 mgr.y (mgr.44107) 
208 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:19 vm00 bash[69512]: audit 2026-03-09T18:46:18.121769+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:19 vm00 bash[69512]: audit 2026-03-09T18:46:18.121769+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:19 vm08 bash[46122]: cluster 2026-03-09T18:46:18.080509+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:19 vm08 bash[46122]: cluster 2026-03-09T18:46:18.080509+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:19 vm08 bash[46122]: audit 2026-03-09T18:46:18.121769+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:19 vm08 bash[46122]: audit 2026-03-09T18:46:18.121769+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - 
[09/Mar/2026:18:46:19] "GET /metrics HTTP/1.1" 200 37828 "" "Prometheus/2.51.0" 2026-03-09T18:46:20.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:20 vm00 bash[65531]: audit 2026-03-09T18:46:18.653727+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:20 vm00 bash[65531]: audit 2026-03-09T18:46:18.653727+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:20 vm00 bash[65531]: audit 2026-03-09T18:46:19.190092+0000 mon.c (mon.1) 237 : audit [DBG] from='client.? 192.168.123.100:0/3115476354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:20 vm00 bash[65531]: audit 2026-03-09T18:46:19.190092+0000 mon.c (mon.1) 237 : audit [DBG] from='client.? 192.168.123.100:0/3115476354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:20 vm00 bash[69512]: audit 2026-03-09T18:46:18.653727+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:20 vm00 bash[69512]: audit 2026-03-09T18:46:18.653727+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:20 vm00 bash[69512]: audit 2026-03-09T18:46:19.190092+0000 mon.c (mon.1) 237 : audit [DBG] from='client.? 
192.168.123.100:0/3115476354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:20 vm00 bash[69512]: audit 2026-03-09T18:46:19.190092+0000 mon.c (mon.1) 237 : audit [DBG] from='client.? 192.168.123.100:0/3115476354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:20 vm08 bash[46122]: audit 2026-03-09T18:46:18.653727+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:20 vm08 bash[46122]: audit 2026-03-09T18:46:18.653727+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:20 vm08 bash[46122]: audit 2026-03-09T18:46:19.190092+0000 mon.c (mon.1) 237 : audit [DBG] from='client.? 192.168.123.100:0/3115476354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:20 vm08 bash[46122]: audit 2026-03-09T18:46:19.190092+0000 mon.c (mon.1) 237 : audit [DBG] from='client.? 
192.168.123.100:0/3115476354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:21.116 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:46:21.164 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:21 vm00 bash[69512]: audit 2026-03-09T18:46:19.691320+0000 mgr.y (mgr.44107) 211 : audit [DBG] from='client.54336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:21 vm00 bash[69512]: audit 2026-03-09T18:46:19.691320+0000 mgr.y (mgr.44107) 211 : audit [DBG] from='client.54336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:21 vm00 bash[69512]: cluster 2026-03-09T18:46:20.080931+0000 mgr.y (mgr.44107) 212 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:21 vm00 bash[69512]: cluster 2026-03-09T18:46:20.080931+0000 mgr.y (mgr.44107) 212 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:21 vm00 bash[65531]: audit 2026-03-09T18:46:19.691320+0000 mgr.y (mgr.44107) 211 : audit [DBG] from='client.54336 -' 
entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:21 vm00 bash[65531]: audit 2026-03-09T18:46:19.691320+0000 mgr.y (mgr.44107) 211 : audit [DBG] from='client.54336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:21 vm00 bash[65531]: cluster 2026-03-09T18:46:20.080931+0000 mgr.y (mgr.44107) 212 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:21.420 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:21 vm00 bash[65531]: cluster 2026-03-09T18:46:20.080931+0000 mgr.y (mgr.44107) 212 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null, 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false, 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "which": "", 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null, 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "message": "", 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:46:21.630 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:46:21.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:21 vm08 bash[46122]: audit 2026-03-09T18:46:19.691320+0000 mgr.y 
(mgr.44107) 211 : audit [DBG] from='client.54336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:21.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:21 vm08 bash[46122]: audit 2026-03-09T18:46:19.691320+0000 mgr.y (mgr.44107) 211 : audit [DBG] from='client.54336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:21.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:21 vm08 bash[46122]: cluster 2026-03-09T18:46:20.080931+0000 mgr.y (mgr.44107) 212 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:21.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:21 vm08 bash[46122]: cluster 2026-03-09T18:46:20.080931+0000 mgr.y (mgr.44107) 212 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:46:22.123 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:46:22.610 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:46:22.662 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd' 2026-03-09T18:46:23.628 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: audit 2026-03-09T18:46:21.555081+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:23.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: audit 2026-03-09T18:46:21.555081+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:23.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: audit 2026-03-09T18:46:21.633696+0000 mgr.y (mgr.44107) 214 : audit [DBG] from='client.54342 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: audit 2026-03-09T18:46:21.633696+0000 mgr.y (mgr.44107) 214 : audit [DBG] from='client.54342 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: cluster 2026-03-09T18:46:22.081288+0000 mgr.y (mgr.44107) 215 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: cluster 2026-03-09T18:46:22.081288+0000 mgr.y (mgr.44107) 215 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: audit 2026-03-09T18:46:22.613849+0000 mon.c (mon.1) 238 : audit [DBG] from='client.? 
192.168.123.100:0/2585585215' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:23 vm00 bash[65531]: audit 2026-03-09T18:46:22.613849+0000 mon.c (mon.1) 238 : audit [DBG] from='client.? 192.168.123.100:0/2585585215' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: audit 2026-03-09T18:46:21.555081+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: audit 2026-03-09T18:46:21.555081+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: audit 2026-03-09T18:46:21.633696+0000 mgr.y (mgr.44107) 214 : audit [DBG] from='client.54342 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: audit 2026-03-09T18:46:21.633696+0000 mgr.y (mgr.44107) 214 : audit [DBG] from='client.54342 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: cluster 2026-03-09T18:46:22.081288+0000 mgr.y (mgr.44107) 215 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: cluster 2026-03-09T18:46:22.081288+0000 
mgr.y (mgr.44107) 215 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: audit 2026-03-09T18:46:22.613849+0000 mon.c (mon.1) 238 : audit [DBG] from='client.? 192.168.123.100:0/2585585215' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:46:23.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:23 vm00 bash[69512]: audit 2026-03-09T18:46:22.613849+0000 mon.c (mon.1) 238 : audit [DBG] from='client.? 192.168.123.100:0/2585585215' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: audit 2026-03-09T18:46:21.555081+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: audit 2026-03-09T18:46:21.555081+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: audit 2026-03-09T18:46:21.633696+0000 mgr.y (mgr.44107) 214 : audit [DBG] from='client.54342 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: audit 2026-03-09T18:46:21.633696+0000 mgr.y (mgr.44107) 214 : audit [DBG] from='client.54342 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 
bash[46122]: cluster 2026-03-09T18:46:22.081288+0000 mgr.y (mgr.44107) 215 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: cluster 2026-03-09T18:46:22.081288+0000 mgr.y (mgr.44107) 215 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: audit 2026-03-09T18:46:22.613849+0000 mon.c (mon.1) 238 : audit [DBG] from='client.? 192.168.123.100:0/2585585215' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:46:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:23 vm08 bash[46122]: audit 2026-03-09T18:46:22.613849+0000 mon.c (mon.1) 238 : audit [DBG] from='client.? 192.168.123.100:0/2585585215' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:46:24.542 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:24 vm00 bash[69512]: audit 2026-03-09T18:46:23.125633+0000 mgr.y (mgr.44107) 216 : audit [DBG] from='client.54354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:24.542 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:24 vm00 bash[69512]: audit 2026-03-09T18:46:23.125633+0000 mgr.y (mgr.44107) 216 : audit [DBG] from='client.54354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:24.542 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:24 vm00 bash[65531]: audit 2026-03-09T18:46:23.125633+0000 mgr.y (mgr.44107) 216 : audit [DBG] from='client.54354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:24.542 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:24 vm00 bash[65531]: audit 2026-03-09T18:46:23.125633+0000 mgr.y (mgr.44107) 216 : audit [DBG] from='client.54354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:24.709 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:24 vm08 bash[46122]: audit 2026-03-09T18:46:23.125633+0000 mgr.y (mgr.44107) 216 : audit [DBG] from='client.54354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:24 vm08 bash[46122]: audit 2026-03-09T18:46:23.125633+0000 mgr.y (mgr.44107) 216 : audit [DBG] from='client.54354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:24.984 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: cluster 2026-03-09T18:46:24.081644+0000 mgr.y (mgr.44107) 217 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: cluster 2026-03-09T18:46:24.081644+0000 mgr.y (mgr.44107) 217 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.708238+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.708238+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.713504+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.713504+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.715522+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.715522+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.717511+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.717511+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.723792+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:25.816 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:25 vm00 bash[69512]: audit 2026-03-09T18:46:24.723792+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: cluster 2026-03-09T18:46:24.081644+0000 mgr.y (mgr.44107) 217 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: cluster 2026-03-09T18:46:24.081644+0000 mgr.y (mgr.44107) 217 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.708238+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.708238+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.713504+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.713504+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.715522+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.715522+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.717511+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.717511+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.723792+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:25 vm00 bash[65531]: audit 2026-03-09T18:46:24.723792+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: cluster 2026-03-09T18:46:24.081644+0000 mgr.y (mgr.44107) 217 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: cluster 2026-03-09T18:46:24.081644+0000 mgr.y (mgr.44107) 217 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.708238+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.708238+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.713504+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.713504+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.715522+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.715522+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.717511+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.717511+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.723792+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.232 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:25 vm08 bash[46122]: audit 2026-03-09T18:46:24.723792+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.661 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: cephadm 2026-03-09T18:46:24.702114+0000 mgr.y (mgr.44107) 218 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: cephadm 2026-03-09T18:46:24.702114+0000 mgr.y (mgr.44107) 218 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: cephadm 2026-03-09T18:46:24.778303+0000 mgr.y (mgr.44107) 219 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: cephadm 2026-03-09T18:46:24.778303+0000 mgr.y (mgr.44107) 219 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: cluster 2026-03-09T18:46:26.082157+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: cluster 2026-03-09T18:46:26.082157+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.519850+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.519850+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.523670+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.523670+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.525183+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.525183+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.529422+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:26 vm00 bash[65531]: audit 2026-03-09T18:46:26.529422+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: cephadm 2026-03-09T18:46:24.702114+0000 mgr.y (mgr.44107) 218 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: cephadm 2026-03-09T18:46:24.702114+0000 mgr.y (mgr.44107) 218 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: cephadm 2026-03-09T18:46:24.778303+0000 mgr.y (mgr.44107) 219 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: cephadm 2026-03-09T18:46:24.778303+0000 mgr.y (mgr.44107) 219 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: cluster 2026-03-09T18:46:26.082157+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: cluster 2026-03-09T18:46:26.082157+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.519850+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.519850+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.523670+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.523670+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.525183+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.525183+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.529422+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:26 vm00 bash[69512]: audit 2026-03-09T18:46:26.529422+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: cephadm 2026-03-09T18:46:24.702114+0000 mgr.y (mgr.44107) 218 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: cephadm 2026-03-09T18:46:24.702114+0000 mgr.y (mgr.44107) 218 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: cephadm 2026-03-09T18:46:24.778303+0000 mgr.y (mgr.44107) 219 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: cephadm 2026-03-09T18:46:24.778303+0000 mgr.y (mgr.44107) 219 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: cluster 2026-03-09T18:46:26.082157+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: cluster 2026-03-09T18:46:26.082157+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.519850+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.519850+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.523670+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.523670+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.525183+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.525183+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.529422+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:26.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:26 vm08 bash[46122]: audit 2026-03-09T18:46:26.529422+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (16m) 30s ago 23m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (4m) 2m ago 23m 65.0M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (4m) 30s ago 23m 44.0M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (4m) 2m ago 26m 462M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (13m) 30s ago 27m 528M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (2m) 30s ago 27m 48.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (3m) 2m ago 26m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (2m) 30s ago 26m 44.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (16m) 30s ago 23m 7940k - 1.7.0 72c9c2088986 
c2e3e3202fde 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (16m) 2m ago 23m 7956k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (35s) 30s ago 26m 12.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (25m) 30s ago 25m 57.1M 4096M 17.2.0 e1d6a67b021e d8607e6df9d6 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (87s) 30s ago 25m 43.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (104s) 30s ago 25m 68.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (25m) 2m ago 25m 54.9M 4096M 17.2.0 e1d6a67b021e 781045b06a16 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (24m) 2m ago 24m 53.9M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (24m) 2m ago 24m 52.7M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:46:27.129 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (24m) 2m ago 24m 52.2M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:46:27.130 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (4m) 2m ago 23m 41.3M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:46:27.130 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (23m) 30s ago 23m 89.2M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:46:27.130 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (23m) 2m ago 23m 89.6M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:46:27.440 
INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 5, 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 7, 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:46:27.440 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 
2026-03-09T18:46:26.522514+0000 mgr.y (mgr.44107) 221 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.522514+0000 mgr.y (mgr.44107) 221 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.522542+0000 mgr.y (mgr.44107) 222 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.522542+0000 mgr.y (mgr.44107) 222 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.525820+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.525820+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.588429+0000 mon.c (mon.1) 244 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 
2026-03-09T18:46:26.588429+0000 mon.c (mon.1) 244 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.589026+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.589026+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.654181+0000 mgr.y (mgr.44107) 225 : audit [DBG] from='client.44350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.654181+0000 mgr.y (mgr.44107) 225 : audit [DBG] from='client.44350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.842710+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.842710+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.863425+0000 mon.c (mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.863425+0000 mon.c 
(mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.864171+0000 mgr.y (mgr.44107) 226 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.923090+0000 mgr.y (mgr.44107) 227 : audit [DBG] from='client.44356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.937330+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.962771+0000 mon.c (mon.1) 246 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:26.962908+0000 mgr.y (mgr.44107) 228 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: cephadm 2026-03-09T18:46:26.963653+0000 mgr.y (mgr.44107) 229 : cephadm [INF] Upgrade: osd.1 is safe to restart
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:27.127320+0000 mgr.y (mgr.44107) 230 : audit [DBG] from='client.54369 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:27.443929+0000 mon.c (mon.1) 247 : audit [DBG] from='client.? 192.168.123.100:0/4069832492' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:27.499646+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:27.503431+0000 mon.c (mon.1) 248 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T18:46:27.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:27 vm00 bash[69512]: audit 2026-03-09T18:46:27.504112+0000 mon.c (mon.1) 249 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00
bash[65531]: cephadm 2026-03-09T18:46:26.522514+0000 mgr.y (mgr.44107) 221 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: cephadm 2026-03-09T18:46:26.522542+0000 mgr.y (mgr.44107) 222 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: cephadm 2026-03-09T18:46:26.525820+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.588429+0000 mon.c (mon.1) 244 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: cephadm 2026-03-09T18:46:26.589026+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.654181+0000 mgr.y (mgr.44107) 225 : audit [DBG] from='client.44350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.842710+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.863425+0000 mon.c (mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: cephadm 2026-03-09T18:46:26.864171+0000 mgr.y (mgr.44107) 226 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.923090+0000 mgr.y (mgr.44107) 227 : audit [DBG] from='client.44356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.937330+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.962771+0000 mon.c (mon.1) 246 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:26.962908+0000 mgr.y (mgr.44107) 228 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: cephadm 2026-03-09T18:46:26.963653+0000 mgr.y (mgr.44107) 229 : cephadm [INF] Upgrade: osd.1 is safe to restart
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:27.127320+0000 mgr.y (mgr.44107) 230 : audit [DBG] from='client.54369 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:27.443929+0000 mon.c (mon.1) 247 : audit [DBG] from='client.? 192.168.123.100:0/4069832492' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:27.499646+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:27.503431+0000 mon.c (mon.1) 248 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T18:46:27.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:27 vm00 bash[65531]: audit 2026-03-09T18:46:27.504112+0000 mon.c (mon.1) 249 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true,
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) crash,osd",
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [],
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "3/8 daemons upgraded",
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading osd daemons",
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:46:27.675 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: cephadm 2026-03-09T18:46:26.522514+0000 mgr.y (mgr.44107) 221 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: cephadm 2026-03-09T18:46:26.522542+0000 mgr.y (mgr.44107) 222 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: cephadm 2026-03-09T18:46:26.525820+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.588429+0000 mon.c (mon.1) 244 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: cephadm 2026-03-09T18:46:26.589026+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.654181+0000 mgr.y (mgr.44107) 225 : audit [DBG] from='client.44350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.842710+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.863425+0000 mon.c (mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: cephadm 2026-03-09T18:46:26.864171+0000 mgr.y (mgr.44107) 226 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.923090+0000 mgr.y (mgr.44107) 227 : audit [DBG] from='client.44356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.937330+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.962771+0000 mon.c (mon.1) 246 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:26.962908+0000 mgr.y (mgr.44107) 228 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: cephadm 2026-03-09T18:46:26.963653+0000 mgr.y (mgr.44107) 229 : cephadm [INF] Upgrade: osd.1 is safe to restart
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:27.127320+0000 mgr.y (mgr.44107) 230 : audit [DBG] from='client.54369 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:27.443929+0000 mon.c (mon.1) 247 : audit [DBG] from='client.? 192.168.123.100:0/4069832492' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:27.499646+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:27.503431+0000 mon.c (mon.1) 248 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T18:46:27.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:27 vm08 bash[46122]: audit 2026-03-09T18:46:27.504112+0000 mon.c (mon.1) 249 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:28.611 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service.
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.611 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.611 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.611 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:46:28.612 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: Stopping Ceph osd.1 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:28 vm00 bash[28319]: debug 2026-03-09T18:46:28.392+0000 7f817c7f6700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:28 vm00 bash[28319]: debug 2026-03-09T18:46:28.392+0000 7f817c7f6700 -1 osd.1 116 *** Got signal Terminated *** 2026-03-09T18:46:28.612 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:28 vm00 bash[28319]: debug 2026-03-09T18:46:28.392+0000 7f817c7f6700 -1 osd.1 116 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:46:28.612 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:46:28 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:46:28.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:28 vm00 bash[65531]: cephadm 2026-03-09T18:46:27.495065+0000 mgr.y (mgr.44107) 231 : cephadm [INF] Upgrade: Updating osd.1
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:28 vm00 bash[65531]: cephadm 2026-03-09T18:46:27.505520+0000 mgr.y (mgr.44107) 232 : cephadm [INF] Deploying daemon osd.1 on vm00
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:28 vm00 bash[65531]: audit 2026-03-09T18:46:27.678380+0000 mgr.y (mgr.44107) 233 : audit [DBG] from='client.44374 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:28 vm00 bash[65531]: cluster 2026-03-09T18:46:28.082532+0000 mgr.y (mgr.44107) 234 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:28 vm00 bash[65531]: cluster 2026-03-09T18:46:28.394041+0000 mon.a (mon.0) 327 : cluster [INF] osd.1 marked itself down and dead
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:28 vm00 bash[69512]: cephadm 2026-03-09T18:46:27.495065+0000 mgr.y (mgr.44107) 231 : cephadm [INF] Upgrade: Updating osd.1
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:28 vm00 bash[69512]: cephadm 2026-03-09T18:46:27.505520+0000 mgr.y (mgr.44107) 232 : cephadm [INF] Deploying daemon osd.1 on vm00
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:28 vm00 bash[69512]: audit 2026-03-09T18:46:27.678380+0000 mgr.y (mgr.44107) 233 : audit [DBG] from='client.44374 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:28 vm00 bash[69512]: cluster 2026-03-09T18:46:28.082532+0000 mgr.y (mgr.44107) 234 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:28.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:28 vm00 bash[69512]: cluster 2026-03-09T18:46:28.394041+0000 mon.a (mon.0) 327 : cluster [INF] osd.1 marked itself down and dead
2026-03-09T18:46:28.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:28 vm00 bash[93626]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-1
2026-03-09T18:46:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:28 vm08 bash[46122]: cephadm 2026-03-09T18:46:27.495065+0000 mgr.y (mgr.44107) 231 : cephadm [INF] Upgrade: Updating osd.1
2026-03-09T18:46:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:28 vm08 bash[46122]: cephadm 2026-03-09T18:46:27.505520+0000 mgr.y (mgr.44107) 232 : cephadm [INF] Deploying daemon osd.1 on vm00
2026-03-09T18:46:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:28 vm08 bash[46122]: audit 2026-03-09T18:46:27.678380+0000 mgr.y (mgr.44107) 233 : audit [DBG] from='client.44374 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:28 vm08 bash[46122]: cluster 2026-03-09T18:46:28.082532+0000 mgr.y (mgr.44107) 234 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:28.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:28 vm08 bash[46122]: cluster 2026-03-09T18:46:28.394041+0000 mon.a (mon.0) 327 : cluster [INF] osd.1 marked itself down and dead
2026-03-09T18:46:29.183 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.1.service: Deactivated successfully.
2026-03-09T18:46:29.183 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: Stopped Ceph osd.1 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:46:29.495 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service.
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.495 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: Started Ceph osd.1 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:29.496 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:46:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: cluster 2026-03-09T18:46:28.614094+0000 mon.a (mon.0) 328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:46:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: cluster 2026-03-09T18:46:28.614094+0000 mon.a (mon.0) 328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:46:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: cluster 2026-03-09T18:46:28.677657+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-09T18:46:29.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: cluster 2026-03-09T18:46:28.677657+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: audit 2026-03-09T18:46:29.441025+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: audit 2026-03-09T18:46:29.441025+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: audit 2026-03-09T18:46:29.453217+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: audit 2026-03-09T18:46:29.453217+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:29 vm00 bash[65531]: audit 2026-03-09T18:46:29.454771+0000 mon.c (mon.1) 250 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:46:29 vm00 bash[65531]: audit 2026-03-09T18:46:29.454771+0000 mon.c (mon.1) 250 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: cluster 2026-03-09T18:46:28.614094+0000 mon.a (mon.0) 328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: cluster 2026-03-09T18:46:28.614094+0000 mon.a (mon.0) 328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: cluster 2026-03-09T18:46:28.677657+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: cluster 2026-03-09T18:46:28.677657+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: audit 2026-03-09T18:46:29.441025+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: audit 2026-03-09T18:46:29.441025+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: audit 2026-03-09T18:46:29.453217+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: audit 2026-03-09T18:46:29.453217+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: audit 2026-03-09T18:46:29.454771+0000 mon.c 
(mon.1) 250 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:29 vm00 bash[69512]: audit 2026-03-09T18:46:29.454771+0000 mon.c (mon.1) 250 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:29.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:46:29] "GET /metrics HTTP/1.1" 200 37828 "" "Prometheus/2.51.0" 2026-03-09T18:46:29.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:29 vm00 bash[93838]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:46:29.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:29 vm00 bash[93838]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: cluster 2026-03-09T18:46:28.614094+0000 mon.a (mon.0) 328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: cluster 2026-03-09T18:46:28.614094+0000 mon.a (mon.0) 328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: cluster 2026-03-09T18:46:28.677657+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: cluster 2026-03-09T18:46:28.677657+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: audit 2026-03-09T18:46:29.441025+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: audit 2026-03-09T18:46:29.441025+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: audit 2026-03-09T18:46:29.453217+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: audit 2026-03-09T18:46:29.453217+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: audit 2026-03-09T18:46:29.454771+0000 mon.c (mon.1) 250 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:29.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:29 vm08 bash[46122]: audit 2026-03-09T18:46:29.454771+0000 mon.c (mon.1) 250 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:30 vm00 bash[65531]: cluster 2026-03-09T18:46:29.675669+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:30 vm00 bash[65531]: cluster 2026-03-09T18:46:29.675669+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:30 vm00 bash[65531]: cluster 2026-03-09T18:46:30.082979+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v101: 161 pgs: 4 active+undersized, 37 peering, 6 stale+active+clean, 2 active+undersized+degraded, 112 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 7/627 objects degraded (1.116%) 
2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:30 vm00 bash[65531]: cluster 2026-03-09T18:46:30.082979+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v101: 161 pgs: 4 active+undersized, 37 peering, 6 stale+active+clean, 2 active+undersized+degraded, 112 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 7/627 objects degraded (1.116%) 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:30 vm00 bash[69512]: cluster 2026-03-09T18:46:29.675669+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:30 vm00 bash[69512]: cluster 2026-03-09T18:46:29.675669+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:30 vm00 bash[69512]: cluster 2026-03-09T18:46:30.082979+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v101: 161 pgs: 4 active+undersized, 37 peering, 6 stale+active+clean, 2 active+undersized+degraded, 112 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 7/627 objects degraded (1.116%) 2026-03-09T18:46:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:30 vm00 bash[69512]: cluster 2026-03-09T18:46:30.082979+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v101: 161 pgs: 4 active+undersized, 37 peering, 6 stale+active+clean, 2 active+undersized+degraded, 112 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 7/627 objects degraded (1.116%) 2026-03-09T18:46:30.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T18:46:30.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:46:30.879 
INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:46:30.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T18:46:30.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-efb67db2-9857-4d61-8bce-aad6b3e85734/osd-block-9fc1e6b3-451c-497e-a994-131046179fb9 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-09T18:46:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:30 vm08 bash[46122]: cluster 2026-03-09T18:46:29.675669+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T18:46:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:30 vm08 bash[46122]: cluster 2026-03-09T18:46:29.675669+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T18:46:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:30 vm08 bash[46122]: cluster 2026-03-09T18:46:30.082979+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v101: 161 pgs: 4 active+undersized, 37 peering, 6 stale+active+clean, 2 active+undersized+degraded, 112 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 7/627 objects degraded (1.116%) 2026-03-09T18:46:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:30 vm08 bash[46122]: cluster 2026-03-09T18:46:30.082979+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v101: 161 pgs: 4 active+undersized, 37 peering, 6 stale+active+clean, 2 active+undersized+degraded, 112 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 7/627 objects degraded (1.116%) 2026-03-09T18:46:31.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/ln -snf 
/dev/ceph-efb67db2-9857-4d61-8bce-aad6b3e85734/osd-block-9fc1e6b3-451c-497e-a994-131046179fb9 /var/lib/ceph/osd/ceph-1/block 2026-03-09T18:46:31.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-09T18:46:31.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T18:46:31.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T18:46:31.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:30 vm00 bash[93838]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:31 vm00 bash[65531]: cluster 2026-03-09T18:46:30.663966+0000 mon.a (mon.0) 333 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 9 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:31 vm00 bash[65531]: cluster 2026-03-09T18:46:30.663966+0000 mon.a (mon.0) 333 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 9 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:31 vm00 bash[65531]: cluster 2026-03-09T18:46:30.663981+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:31 vm00 bash[65531]: cluster 2026-03-09T18:46:30.663981+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:31 vm00 bash[69512]: cluster 2026-03-09T18:46:30.663966+0000 
mon.a (mon.0) 333 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 9 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:31 vm00 bash[69512]: cluster 2026-03-09T18:46:30.663966+0000 mon.a (mon.0) 333 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 9 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:31 vm00 bash[69512]: cluster 2026-03-09T18:46:30.663981+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:31 vm00 bash[69512]: cluster 2026-03-09T18:46:30.663981+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:31.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:31 vm00 bash[94183]: debug 2026-03-09T18:46:31.780+0000 7f8c2e471740 -1 Falling back to public interface 2026-03-09T18:46:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:31 vm08 bash[46122]: cluster 2026-03-09T18:46:30.663966+0000 mon.a (mon.0) 333 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 9 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:31 vm08 bash[46122]: cluster 2026-03-09T18:46:30.663966+0000 mon.a (mon.0) 333 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive, 9 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:31.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:31 vm08 bash[46122]: cluster 2026-03-09T18:46:30.663981+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:31.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:31 vm08 bash[46122]: cluster 2026-03-09T18:46:30.663981+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:33.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:32 vm00 bash[65531]: audit 2026-03-09T18:46:31.565649+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:32 vm00 bash[65531]: audit 2026-03-09T18:46:31.565649+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:32 vm00 bash[65531]: cluster 2026-03-09T18:46:32.083467+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v102: 161 pgs: 11 active+undersized, 37 peering, 3 stale+active+clean, 8 active+undersized+degraded, 102 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 28/627 objects degraded (4.466%) 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:32 vm00 bash[65531]: cluster 2026-03-09T18:46:32.083467+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v102: 161 pgs: 11 active+undersized, 37 peering, 3 stale+active+clean, 8 active+undersized+degraded, 102 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 28/627 objects degraded (4.466%) 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:32 vm00 bash[69512]: audit 2026-03-09T18:46:31.565649+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:32 vm00 
bash[69512]: audit 2026-03-09T18:46:31.565649+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:32 vm00 bash[69512]: cluster 2026-03-09T18:46:32.083467+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v102: 161 pgs: 11 active+undersized, 37 peering, 3 stale+active+clean, 8 active+undersized+degraded, 102 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 28/627 objects degraded (4.466%) 2026-03-09T18:46:33.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:32 vm00 bash[69512]: cluster 2026-03-09T18:46:32.083467+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v102: 161 pgs: 11 active+undersized, 37 peering, 3 stale+active+clean, 8 active+undersized+degraded, 102 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 28/627 objects degraded (4.466%) 2026-03-09T18:46:33.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:32 vm08 bash[46122]: audit 2026-03-09T18:46:31.565649+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:33.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:32 vm08 bash[46122]: audit 2026-03-09T18:46:31.565649+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:33.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:32 vm08 bash[46122]: cluster 2026-03-09T18:46:32.083467+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v102: 161 pgs: 11 active+undersized, 37 peering, 3 stale+active+clean, 8 active+undersized+degraded, 102 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 28/627 objects degraded (4.466%) 2026-03-09T18:46:33.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:32 vm08 bash[46122]: cluster 2026-03-09T18:46:32.083467+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v102: 161 pgs: 11 active+undersized, 37 peering, 3 stale+active+clean, 8 active+undersized+degraded, 102 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 28/627 objects degraded (4.466%) 2026-03-09T18:46:33.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:33 vm00 bash[94183]: debug 2026-03-09T18:46:33.444+0000 7f8c2e471740 -1 osd.1 0 read_superblock omap replica is missing. 2026-03-09T18:46:33.879 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:33 vm00 bash[94183]: debug 2026-03-09T18:46:33.484+0000 7f8c2e471740 -1 osd.1 116 log_to_monitors true 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.140110+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.140110+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.160591+0000 mon.c (mon.1) 251 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.160591+0000 mon.c (mon.1) 251 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.490721+0000 mon.c (mon.1) 252 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.490721+0000 mon.c (mon.1) 252 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.491240+0000 mon.a (mon.0) 336 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:46:34.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:34 vm08 bash[46122]: audit 2026-03-09T18:46:33.491240+0000 mon.a (mon.0) 336 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:46:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:34 vm00 bash[65531]: audit 2026-03-09T18:46:33.140110+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:34 vm00 bash[65531]: audit 2026-03-09T18:46:33.140110+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:34 vm00 bash[65531]: audit 2026-03-09T18:46:33.160591+0000 mon.c (mon.1) 251 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:34 vm00 bash[65531]: audit 2026-03-09T18:46:33.160591+0000 mon.c (mon.1) 251 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T18:46:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:34 vm00 bash[65531]: audit 2026-03-09T18:46:33.490721+0000 mon.c (mon.1) 252 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T18:46:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:34 vm00 bash[65531]: audit 2026-03-09T18:46:33.491240+0000 mon.a (mon.0) 336 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T18:46:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:34 vm00 bash[69512]: audit 2026-03-09T18:46:33.140110+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:34 vm00 bash[69512]: audit 2026-03-09T18:46:33.160591+0000 mon.c (mon.1) 251 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:46:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:34 vm00 bash[69512]: audit 2026-03-09T18:46:33.490721+0000 mon.c (mon.1) 252 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T18:46:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:34 vm00 bash[69512]: audit 2026-03-09T18:46:33.491240+0000 mon.a (mon.0) 336 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T18:46:34.629 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:46:34 vm00 bash[94183]: debug 2026-03-09T18:46:34.176+0000 7f8c2621c640 -1 osd.1 116 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T18:46:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:35 vm08 bash[46122]: cluster 2026-03-09T18:46:34.083848+0000 mgr.y (mgr.44107) 238 : cluster [DBG] pgmap v103: 161 pgs: 15 active+undersized, 37 peering, 11 active+undersized+degraded, 98 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 48/627 objects degraded (7.656%)
2026-03-09T18:46:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:35 vm08 bash[46122]: audit 2026-03-09T18:46:34.146429+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T18:46:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:35 vm08 bash[46122]: cluster 2026-03-09T18:46:34.152829+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-09T18:46:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:35 vm08 bash[46122]: audit 2026-03-09T18:46:34.156089+0000 mon.c (mon.1) 253 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-09T18:46:35.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:35 vm08 bash[46122]: audit 2026-03-09T18:46:34.156376+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-09T18:46:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:35 vm00 bash[65531]: cluster 2026-03-09T18:46:34.083848+0000 mgr.y (mgr.44107) 238 : cluster [DBG] pgmap v103: 161 pgs: 15 active+undersized, 37 peering, 11 active+undersized+degraded, 98 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 48/627 objects degraded (7.656%)
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:35 vm00 bash[65531]: audit 2026-03-09T18:46:34.146429+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:35 vm00 bash[65531]: cluster 2026-03-09T18:46:34.152829+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:35 vm00 bash[65531]: audit 2026-03-09T18:46:34.156089+0000 mon.c (mon.1) 253 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:35 vm00 bash[65531]: audit 2026-03-09T18:46:34.156376+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:35 vm00 bash[69512]: cluster 2026-03-09T18:46:34.083848+0000 mgr.y (mgr.44107) 238 : cluster [DBG] pgmap v103: 161 pgs: 15 active+undersized, 37 peering, 11 active+undersized+degraded, 98 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 48/627 objects degraded (7.656%)
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:35 vm00 bash[69512]: audit 2026-03-09T18:46:34.146429+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:35 vm00 bash[69512]: cluster 2026-03-09T18:46:34.152829+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:35 vm00 bash[69512]: audit 2026-03-09T18:46:34.156089+0000 mon.c (mon.1) 253 : audit [INF] from='osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-09T18:46:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:35 vm00 bash[69512]: audit 2026-03-09T18:46:34.156376+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-09T18:46:36.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:36 vm08 bash[46122]: cluster 2026-03-09T18:46:35.146840+0000 mon.a (mon.0) 340 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:46:36.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:36 vm08 bash[46122]: cluster 2026-03-09T18:46:35.171644+0000 mon.a (mon.0) 341 : cluster [INF] osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086] boot
2026-03-09T18:46:36.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:36 vm08 bash[46122]: cluster 2026-03-09T18:46:35.171656+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-09T18:46:36.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:36 vm08 bash[46122]: audit 2026-03-09T18:46:35.183095+0000 mon.c (mon.1) 254 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:46:36.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:36 vm08 bash[46122]: audit 2026-03-09T18:46:35.948386+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:36.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:36 vm08 bash[46122]: audit 2026-03-09T18:46:35.956690+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:37.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:36 vm00 bash[65531]: cluster 2026-03-09T18:46:35.146840+0000 mon.a (mon.0) 340 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:36 vm00 bash[65531]: cluster 2026-03-09T18:46:35.171644+0000 mon.a (mon.0) 341 : cluster [INF] osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086] boot
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:36 vm00 bash[65531]: cluster 2026-03-09T18:46:35.171656+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:36 vm00 bash[65531]: audit 2026-03-09T18:46:35.183095+0000 mon.c (mon.1) 254 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:36 vm00 bash[65531]: audit 2026-03-09T18:46:35.948386+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:36 vm00 bash[65531]: audit 2026-03-09T18:46:35.956690+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:36 vm00 bash[69512]: cluster 2026-03-09T18:46:35.146840+0000 mon.a (mon.0) 340 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:36 vm00 bash[69512]: cluster 2026-03-09T18:46:35.171644+0000 mon.a (mon.0) 341 : cluster [INF] osd.1 [v2:192.168.123.100:6810/3556877086,v1:192.168.123.100:6811/3556877086] boot
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:36 vm00 bash[69512]: cluster 2026-03-09T18:46:35.171656+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:36 vm00 bash[69512]: audit 2026-03-09T18:46:35.183095+0000 mon.c (mon.1) 254 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:36 vm00 bash[69512]: audit 2026-03-09T18:46:35.948386+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:37.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:36 vm00 bash[69512]: audit 2026-03-09T18:46:35.956690+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:37 vm08 bash[46122]: cluster 2026-03-09T18:46:36.084360+0000 mgr.y (mgr.44107) 239 : cluster [DBG] pgmap v106: 161 pgs: 10 peering, 34 active+undersized, 19 active+undersized+degraded, 98 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 72/627 objects degraded (11.483%)
2026-03-09T18:46:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:37 vm08 bash[46122]: cluster 2026-03-09T18:46:36.350643+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-09T18:46:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:37 vm08 bash[46122]: audit 2026-03-09T18:46:36.885387+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:37 vm08 bash[46122]: cluster 2026-03-09T18:46:36.988561+0000 mon.a (mon.0) 347 : cluster [WRN] Health check update: Degraded data redundancy: 72/627 objects degraded (11.483%), 19 pgs degraded (PG_DEGRADED)
2026-03-09T18:46:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:37 vm08 bash[46122]: cluster 2026-03-09T18:46:36.988578+0000 mon.a (mon.0) 348 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 8 pgs inactive, 9 pgs peering)
2026-03-09T18:46:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:37 vm08 bash[46122]: audit 2026-03-09T18:46:37.094215+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:38.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:37 vm00 bash[65531]: cluster 2026-03-09T18:46:36.084360+0000 mgr.y (mgr.44107) 239 : cluster [DBG] pgmap v106: 161 pgs: 10 peering, 34 active+undersized, 19 active+undersized+degraded, 98 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 72/627 objects degraded (11.483%)
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:37 vm00 bash[65531]: cluster 2026-03-09T18:46:36.350643+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:37 vm00 bash[65531]: audit 2026-03-09T18:46:36.885387+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:37 vm00 bash[65531]: cluster 2026-03-09T18:46:36.988561+0000 mon.a (mon.0) 347 : cluster [WRN] Health check update: Degraded data redundancy: 72/627 objects degraded (11.483%), 19 pgs degraded (PG_DEGRADED)
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:37 vm00 bash[65531]: cluster 2026-03-09T18:46:36.988578+0000 mon.a (mon.0) 348 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 8 pgs inactive, 9 pgs peering)
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:37 vm00 bash[65531]: audit 2026-03-09T18:46:37.094215+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:37 vm00 bash[69512]: cluster 2026-03-09T18:46:36.084360+0000 mgr.y (mgr.44107) 239 : cluster [DBG] pgmap v106: 161 pgs: 10 peering, 34 active+undersized, 19 active+undersized+degraded, 98 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 72/627 objects degraded (11.483%)
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:37 vm00 bash[69512]: cluster 2026-03-09T18:46:36.350643+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:37 vm00 bash[69512]: audit 2026-03-09T18:46:36.885387+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:37 vm00 bash[69512]: cluster 2026-03-09T18:46:36.988561+0000 mon.a (mon.0) 347 : cluster [WRN] Health check update: Degraded data redundancy: 72/627 objects degraded (11.483%), 19 pgs degraded (PG_DEGRADED)
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:37 vm00 bash[69512]: cluster 2026-03-09T18:46:36.988578+0000 mon.a (mon.0) 348 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 8 pgs inactive, 9 pgs peering)
2026-03-09T18:46:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:37 vm00 bash[69512]: audit 2026-03-09T18:46:37.094215+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:38 vm08 bash[46122]: cluster 2026-03-09T18:46:38.084700+0000 mgr.y (mgr.44107) 240 : cluster [DBG] pgmap v108: 161 pgs: 10 peering, 31 active+undersized, 18 active+undersized+degraded, 102 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 71/627 objects degraded (11.324%)
2026-03-09T18:46:39.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:38 vm00 bash[65531]: cluster 2026-03-09T18:46:38.084700+0000 mgr.y (mgr.44107) 240 : cluster [DBG] pgmap v108: 161 pgs: 10 peering, 31 active+undersized, 18 active+undersized+degraded, 102 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 71/627 objects degraded (11.324%)
2026-03-09T18:46:39.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:38 vm00 bash[69512]: cluster 2026-03-09T18:46:38.084700+0000 mgr.y (mgr.44107) 240 : cluster [DBG] pgmap v108: 161 pgs: 10 peering, 31 active+undersized, 18 active+undersized+degraded, 102 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 71/627 objects degraded (11.324%)
2026-03-09T18:46:39.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:46:39] "GET /metrics HTTP/1.1" 200 37829 "" "Prometheus/2.51.0"
2026-03-09T18:46:40.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:40 vm08 bash[46122]: cluster 2026-03-09T18:46:40.135992+0000 mon.a (mon.0) 350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 71/627 objects degraded (11.324%), 18 pgs degraded)
2026-03-09T18:46:40.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:40 vm08 bash[46122]: cluster 2026-03-09T18:46:40.136029+0000 mon.a (mon.0) 351 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:40.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:40 vm00 bash[65531]: cluster 2026-03-09T18:46:40.135992+0000 mon.a (mon.0) 350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 71/627 objects degraded (11.324%), 18 pgs degraded)
2026-03-09T18:46:40.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:40 vm00 bash[65531]: cluster 2026-03-09T18:46:40.136029+0000 mon.a (mon.0) 351 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:40 vm00 bash[69512]: cluster 2026-03-09T18:46:40.135992+0000 mon.a (mon.0) 350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 71/627 objects degraded (11.324%), 18 pgs degraded)
2026-03-09T18:46:40.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:40 vm00 bash[69512]: cluster 2026-03-09T18:46:40.136029+0000 mon.a (mon.0) 351 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:41.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:41 vm08 bash[46122]: cluster 2026-03-09T18:46:40.085201+0000 mgr.y (mgr.44107) 241 : cluster [DBG] pgmap v109: 161 pgs: 10 peering, 151 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:41.567 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:41 vm00 bash[65531]: cluster 2026-03-09T18:46:40.085201+0000 mgr.y (mgr.44107) 241 : cluster [DBG] pgmap v109: 161 pgs: 10 peering, 151 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:41.567 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:41 vm00 bash[69512]: cluster 2026-03-09T18:46:40.085201+0000 mgr.y (mgr.44107) 241 : cluster [DBG] pgmap v109: 161 pgs: 10 peering, 151 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:41.570928+0000 mgr.y (mgr.44107) 242 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: cluster 2026-03-09T18:46:42.085651+0000 mgr.y (mgr.44107) 243 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.754201+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.761426+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.763028+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.763877+0000 mon.c (mon.1) 256 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.768226+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.808154+0000 mon.c (mon.1) 257 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:43.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.809858+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.809858+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.811143+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.811143+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.812239+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.812239+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 2026-03-09T18:46:42.813501+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:43 vm00 bash[65531]: audit 
2026-03-09T18:46:42.813501+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:41.570928+0000 mgr.y (mgr.44107) 242 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:41.570928+0000 mgr.y (mgr.44107) 242 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: cluster 2026-03-09T18:46:42.085651+0000 mgr.y (mgr.44107) 243 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: cluster 2026-03-09T18:46:42.085651+0000 mgr.y (mgr.44107) 243 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.754201+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.754201+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.761426+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.761426+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.763028+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.763028+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.763877+0000 mon.c (mon.1) 256 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.763877+0000 mon.c (mon.1) 256 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.768226+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.768226+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.808154+0000 mon.c (mon.1) 257 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.808154+0000 mon.c (mon.1) 257 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.809858+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.809858+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.811143+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.811143+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.812239+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.812239+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 
vm00 bash[69512]: audit 2026-03-09T18:46:42.813501+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:43.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:43 vm00 bash[69512]: audit 2026-03-09T18:46:42.813501+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:41.570928+0000 mgr.y (mgr.44107) 242 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:41.570928+0000 mgr.y (mgr.44107) 242 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: cluster 2026-03-09T18:46:42.085651+0000 mgr.y (mgr.44107) 243 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: cluster 2026-03-09T18:46:42.085651+0000 mgr.y (mgr.44107) 243 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.754201+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 
2026-03-09T18:46:42.754201+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.761426+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.761426+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.763028+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.763028+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.763877+0000 mon.c (mon.1) 256 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.763877+0000 mon.c (mon.1) 256 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.768226+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.768226+0000 mon.a 
(mon.0) 354 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.808154+0000 mon.c (mon.1) 257 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.808154+0000 mon.c (mon.1) 257 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.809858+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.809858+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.811143+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.811143+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.812239+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.812239+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.813501+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:43 vm08 bash[46122]: audit 2026-03-09T18:46:42.813501+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: Stopping Ceph osd.4 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:42.813653+0000 mgr.y (mgr.44107) 244 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:42.813653+0000 mgr.y (mgr.44107) 244 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: cephadm 2026-03-09T18:46:42.814465+0000 mgr.y (mgr.44107) 245 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: cephadm 2026-03-09T18:46:42.814465+0000 mgr.y (mgr.44107) 245 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:43.699054+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:43.699054+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:43.703536+0000 mon.c (mon.1) 262 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:43.703536+0000 mon.c (mon.1) 262 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:43.704031+0000 mon.c (mon.1) 263 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 bash[46122]: audit 2026-03-09T18:46:43.704031+0000 mon.c (mon.1) 263 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.498 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:46:44 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:46:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:42.813653+0000 mgr.y (mgr.44107) 244 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:42.813653+0000 mgr.y (mgr.44107) 244 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: cephadm 2026-03-09T18:46:42.814465+0000 mgr.y (mgr.44107) 245 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: cephadm 2026-03-09T18:46:42.814465+0000 mgr.y (mgr.44107) 245 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:43.699054+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:43.699054+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:43.703536+0000 mon.c (mon.1) 262 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:43.703536+0000 mon.c (mon.1) 262 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:43.704031+0000 mon.c (mon.1) 263 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:44 vm00 bash[65531]: audit 2026-03-09T18:46:43.704031+0000 mon.c (mon.1) 263 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:42.813653+0000 mgr.y (mgr.44107) 244 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:42.813653+0000 mgr.y (mgr.44107) 244 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: cephadm 2026-03-09T18:46:42.814465+0000 mgr.y (mgr.44107) 245 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: cephadm 2026-03-09T18:46:42.814465+0000 mgr.y (mgr.44107) 245 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:43.699054+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:43.699054+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:43.703536+0000 mon.c (mon.1) 262 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:46:44.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:43.703536+0000 mon.c (mon.1) 262 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:43.704031+0000 mon.c (mon.1) 263 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:44 vm00 bash[69512]: audit 2026-03-09T18:46:43.704031+0000 mon.c (mon.1) 263 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:46:44.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:44 vm08 bash[20830]: debug 2026-03-09T18:46:44.527+0000 7fd6dc700700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:46:44.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:44 vm08 bash[20830]: debug 2026-03-09T18:46:44.527+0000 7fd6dc700700 -1 osd.4 121 *** Got signal Terminated *** 2026-03-09T18:46:44.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:44 vm08 bash[20830]: debug 2026-03-09T18:46:44.527+0000 7fd6dc700700 -1 osd.4 121 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:46:45.357 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:45 vm08 bash[46122]: cephadm 2026-03-09T18:46:43.694284+0000 mgr.y (mgr.44107) 246 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T18:46:45.357 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:45 vm08 bash[46122]: cephadm 2026-03-09T18:46:43.694284+0000 mgr.y (mgr.44107) 246 : cephadm [INF] Upgrade: Updating 
osd.4
2026-03-09T18:46:45.357 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:45 vm08 bash[46122]: cephadm 2026-03-09T18:46:43.705637+0000 mgr.y (mgr.44107) 247 : cephadm [INF] Deploying daemon osd.4 on vm08
2026-03-09T18:46:45.357 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:45 vm08 bash[46122]: cluster 2026-03-09T18:46:44.085905+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:46:45.357 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:45 vm08 bash[46122]: cluster 2026-03-09T18:46:44.535708+0000 mon.a (mon.0) 356 : cluster [INF] osd.4 marked itself down and dead
2026-03-09T18:46:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:45 vm00 bash[65531]: cephadm 2026-03-09T18:46:43.694284+0000 mgr.y (mgr.44107) 246 : cephadm [INF] Upgrade: Updating osd.4
2026-03-09T18:46:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:45 vm00 bash[65531]: cephadm 2026-03-09T18:46:43.705637+0000 mgr.y (mgr.44107) 247 : cephadm [INF] Deploying daemon osd.4 on vm08
2026-03-09T18:46:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:45 vm00 bash[65531]: cluster 2026-03-09T18:46:44.085905+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:46:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:45 vm00 bash[65531]: cluster 2026-03-09T18:46:44.535708+0000 mon.a (mon.0) 356 : cluster [INF] osd.4 marked itself down and dead
2026-03-09T18:46:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:45 vm00 bash[69512]: cephadm 2026-03-09T18:46:43.694284+0000 mgr.y (mgr.44107) 246 : cephadm [INF] Upgrade: Updating osd.4
2026-03-09T18:46:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:45 vm00 bash[69512]: cephadm 2026-03-09T18:46:43.705637+0000 mgr.y (mgr.44107) 247 : cephadm [INF] Deploying daemon osd.4 on vm08
2026-03-09T18:46:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:45 vm00 bash[69512]: cluster 2026-03-09T18:46:44.085905+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:46:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:45 vm00 bash[69512]: cluster 2026-03-09T18:46:44.535708+0000 mon.a (mon.0) 356 : cluster [INF] osd.4 marked itself down and dead
2026-03-09T18:46:45.636 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:45 vm08 bash[53464]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-4
2026-03-09T18:46:45.890 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.4.service: Deactivated successfully.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: Stopped Ceph osd.4 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: Started Ceph osd.4 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:45.891 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:46:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:46:46.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:46 vm08 bash[53677]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:46:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:46 vm00 bash[65531]: cluster 2026-03-09T18:46:45.276246+0000 mon.a (mon.0) 357 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:46:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:46 vm00 bash[65531]: cluster 2026-03-09T18:46:45.314904+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e122: 8 total, 7 up, 8 in
2026-03-09T18:46:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:46 vm00 bash[65531]: audit 2026-03-09T18:46:45.923760+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:46 vm00 bash[65531]: audit 2026-03-09T18:46:45.929795+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:46 vm00 bash[65531]: audit 2026-03-09T18:46:45.931726+0000 mon.c (mon.1) 264 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:46 vm00 bash[69512]: cluster 2026-03-09T18:46:45.276246+0000 mon.a (mon.0) 357 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:46:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:46 vm00 bash[69512]: cluster 2026-03-09T18:46:45.314904+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e122: 8 total, 7 up, 8 in
2026-03-09T18:46:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:46 vm00 bash[69512]: audit 2026-03-09T18:46:45.923760+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:46 vm00 bash[69512]: audit 2026-03-09T18:46:45.929795+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:46 vm00 bash[69512]: audit 2026-03-09T18:46:45.931726+0000 mon.c (mon.1) 264 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:46 vm08 bash[46122]: cluster 2026-03-09T18:46:45.276246+0000 mon.a (mon.0) 357 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:46:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:46 vm08 bash[46122]: cluster 2026-03-09T18:46:45.314904+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e122: 8 total, 7 up, 8 in
2026-03-09T18:46:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:46 vm08 bash[46122]: audit 2026-03-09T18:46:45.923760+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:46 vm08 bash[46122]: audit 2026-03-09T18:46:45.929795+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:46 vm08 bash[46122]: audit 2026-03-09T18:46:45.931726+0000 mon.c (mon.1) 264 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:47.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:46 vm08 bash[53677]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-09T18:46:47.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:46 vm08 bash[53677]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:46:47.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
2026-03-09T18:46:47.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-667efc83-fec5-4867-bcef-958ea7d0a5db/osd-block-28dbafde-327a-4cb7-aaf4-8f0bed8a7a21 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
2026-03-09T18:46:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:47 vm00 bash[65531]: cluster 2026-03-09T18:46:46.086189+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v113: 161 pgs: 23 stale+active+clean, 138 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T18:46:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:47 vm00 bash[65531]: cluster 2026-03-09T18:46:46.325672+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in
2026-03-09T18:46:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:47 vm00 bash[69512]: cluster 2026-03-09T18:46:46.086189+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v113: 161 pgs: 23 stale+active+clean, 138 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T18:46:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:47 vm00 bash[69512]: cluster 2026-03-09T18:46:46.325672+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in
2026-03-09T18:46:47.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: Running command: /usr/bin/ln -snf /dev/ceph-667efc83-fec5-4867-bcef-958ea7d0a5db/osd-block-28dbafde-327a-4cb7-aaf4-8f0bed8a7a21 /var/lib/ceph/osd/ceph-4/block
2026-03-09T18:46:47.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
2026-03-09T18:46:47.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-09T18:46:47.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
2026-03-09T18:46:47.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[53677]: --> ceph-volume lvm activate successful for osd ID: 4
2026-03-09T18:46:47.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:47 vm08 bash[54020]: debug 2026-03-09T18:46:47.475+0000 7f62b431e640 1 -- 192.168.123.108:0/1278066642 <== mon.2 v2:192.168.123.108:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x555efdf3f680 con 0x555efd14d800
2026-03-09T18:46:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:47 vm08 bash[46122]: cluster 2026-03-09T18:46:46.086189+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v113: 161 pgs: 23 stale+active+clean, 138 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T18:46:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:47 vm08 bash[46122]: cluster 2026-03-09T18:46:46.325672+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068051+0000 mon.c (mon.1) 265 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068153+0000 mon.c (mon.1) 266 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068219+0000 mon.c (mon.1) 267 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068288+0000 mon.c (mon.1) 268 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068347+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068410+0000 mon.c (mon.1) 270 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068479+0000 mon.c (mon.1) 271 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068552+0000 mon.c (mon.1) 272 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068620+0000 mon.c (mon.1) 273 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.068773+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.069043+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.069213+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.069409+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.069567+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.069729+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.069887+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.070002+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.070211+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.103822+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.106159+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:46:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:48 vm00 bash[65531]: audit 2026-03-09T18:46:48.156180+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068051+0000 mon.c (mon.1) 265 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch
2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068153+0000 mon.c (mon.1) 266 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch
2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068219+0000 mon.c (mon.1) 267 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch
2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068288+0000 mon.c (mon.1) 268 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch
2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068347+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch
2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48
vm00 bash[69512]: audit 2026-03-09T18:46:48.068347+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068410+0000 mon.c (mon.1) 270 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068410+0000 mon.c (mon.1) 270 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068479+0000 mon.c (mon.1) 271 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068479+0000 mon.c (mon.1) 271 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068552+0000 mon.c (mon.1) 272 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 
2026-03-09T18:46:48.068552+0000 mon.c (mon.1) 272 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068620+0000 mon.c (mon.1) 273 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068620+0000 mon.c (mon.1) 273 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068773+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.068773+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069043+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069043+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", 
"format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069213+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069213+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069409+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069409+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069567+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069567+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069729+0000 mon.a (mon.0) 367 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069729+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069887+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.069887+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.070002+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.070002+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.070211+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.070211+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.103822+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.103822+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.106159+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.106159+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.156180+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:48 vm00 bash[69512]: audit 2026-03-09T18:46:48.156180+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068051+0000 mon.c (mon.1) 265 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.677 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068051+0000 mon.c (mon.1) 265 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068153+0000 mon.c (mon.1) 266 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068153+0000 mon.c (mon.1) 266 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068219+0000 mon.c (mon.1) 267 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068219+0000 mon.c (mon.1) 267 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068288+0000 mon.c (mon.1) 268 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068288+0000 mon.c (mon.1) 268 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068347+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.677 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068347+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068410+0000 mon.c (mon.1) 270 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068410+0000 mon.c (mon.1) 270 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068479+0000 mon.c (mon.1) 271 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 
2026-03-09T18:46:48.068479+0000 mon.c (mon.1) 271 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068552+0000 mon.c (mon.1) 272 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068552+0000 mon.c (mon.1) 272 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068620+0000 mon.c (mon.1) 273 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068620+0000 mon.c (mon.1) 273 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068773+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.068773+0000 mon.a (mon.0) 362 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069043+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069043+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069213+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069213+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069409+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069409+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 
bash[46122]: audit 2026-03-09T18:46:48.069567+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069567+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069729+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069729+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069887+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.069887+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.070002+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: 
dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.070002+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.070211+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.070211+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.103822+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.103822+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.106159+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.106159+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 
2026-03-09T18:46:48.156180+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.678 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:48 vm08 bash[46122]: audit 2026-03-09T18:46:48.156180+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:48.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:48 vm08 bash[54020]: debug 2026-03-09T18:46:48.675+0000 7f62b6b88740 -1 Falling back to public interface 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:48.086487+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v115: 161 pgs: 4 active+undersized, 20 stale+active+clean, 2 active+undersized+degraded, 135 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:48.086487+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v115: 161 pgs: 4 active+undersized, 20 stale+active+clean, 2 active+undersized+degraded, 135 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:48.303349+0000 mon.a (mon.0) 373 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:48.303349+0000 mon.a (mon.0) 373 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314689+0000 mon.a (mon.0) 374 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314689+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314761+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314761+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314812+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314812+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314859+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 
vm00 bash[65531]: audit 2026-03-09T18:46:48.314859+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314906+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314906+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314963+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.314963+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.315003+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.315003+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": 
[1, 2, 3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.315055+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.315055+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.315095+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: audit 2026-03-09T18:46:48.315095+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:48.333486+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:48.333486+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:49.191788+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:49 vm00 bash[65531]: cluster 2026-03-09T18:46:49.191788+0000 mon.a (mon.0) 384 : cluster 
[DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:48.086487+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v115: 161 pgs: 4 active+undersized, 20 stale+active+clean, 2 active+undersized+degraded, 135 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:48.086487+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v115: 161 pgs: 4 active+undersized, 20 stale+active+clean, 2 active+undersized+degraded, 135 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:48.303349+0000 mon.a (mon.0) 373 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:48.303349+0000 mon.a (mon.0) 373 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314689+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314689+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]': finished 
2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314761+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314761+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314812+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314812+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T18:46:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314859+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314859+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314906+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314906+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314963+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.314963+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.315003+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.315003+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.315055+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: 
audit 2026-03-09T18:46:48.315055+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.315095+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: audit 2026-03-09T18:46:48.315095+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]': finished 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:48.333486+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:48.333486+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:49.191788+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:49 vm00 bash[69512]: cluster 2026-03-09T18:46:49.191788+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T18:46:49.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:46:49] "GET /metrics HTTP/1.1" 200 37843 "" "Prometheus/2.51.0" 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:48.086487+0000 mgr.y (mgr.44107) 250 : cluster [DBG] 
pgmap v115: 161 pgs: 4 active+undersized, 20 stale+active+clean, 2 active+undersized+degraded, 135 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:48.086487+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v115: 161 pgs: 4 active+undersized, 20 stale+active+clean, 2 active+undersized+degraded, 135 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 6/627 objects degraded (0.957%) 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:48.303349+0000 mon.a (mon.0) 373 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:48.303349+0000 mon.a (mon.0) 373 : cluster [WRN] Health check failed: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314689+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314689+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.2", "id": [3, 4]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314761+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd 
pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314761+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.1b", "id": [7, 2]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314812+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314812+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314859+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314859+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 4]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314906+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]': finished 2026-03-09T18:46:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 
2026-03-09T18:46:48.314906+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 0]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314963+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.314963+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.315003+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.315003+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2, 3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.315055+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.315055+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.15", "id": [3, 4]}]': 
finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.315095+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: audit 2026-03-09T18:46:48.315095+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.19", "id": [3, 4]}]': finished 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:48.333486+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:48.333486+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:49.191788+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T18:46:49.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:49 vm08 bash[46122]: cluster 2026-03-09T18:46:49.191788+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T18:46:50.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:50 vm08 bash[54020]: debug 2026-03-09T18:46:50.419+0000 7f62b6b88740 -1 osd.4 0 read_superblock omap replica is missing. 
2026-03-09T18:46:50.724 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:50 vm08 bash[54020]: debug 2026-03-09T18:46:50.479+0000 7f62b6b88740 -1 osd.4 121 log_to_monitors true 2026-03-09T18:46:51.474 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:46:51 vm08 bash[54020]: debug 2026-03-09T18:46:51.335+0000 7f62ae933640 -1 osd.4 121 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: cluster 2026-03-09T18:46:50.086904+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v118: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 34 active+undersized, 6 stale+active+clean, 21 active+undersized+degraded, 94 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 97/627 objects degraded (15.470%) 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: cluster 2026-03-09T18:46:50.086904+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v118: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 34 active+undersized, 6 stale+active+clean, 21 active+undersized+degraded, 94 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 97/627 objects degraded (15.470%) 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: cluster 2026-03-09T18:46:50.182635+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e126: 8 total, 7 up, 8 in 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: cluster 2026-03-09T18:46:50.182635+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e126: 8 total, 7 up, 8 in 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: cluster 2026-03-09T18:46:50.307504+0000 mon.a (mon.0) 386 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:46:51 vm08 bash[46122]: cluster 2026-03-09T18:46:50.307504+0000 mon.a (mon.0) 386 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: audit 2026-03-09T18:46:50.488930+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: audit 2026-03-09T18:46:50.488930+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: audit 2026-03-09T18:46:50.492276+0000 mon.a (mon.0) 387 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:51 vm08 bash[46122]: audit 2026-03-09T18:46:50.492276+0000 mon.a (mon.0) 387 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.573 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: cluster 2026-03-09T18:46:50.086904+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v118: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 34 active+undersized, 6 stale+active+clean, 21 active+undersized+degraded, 94 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 97/627 objects degraded (15.470%) 2026-03-09T18:46:51.573 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: cluster 
2026-03-09T18:46:50.086904+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v118: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 34 active+undersized, 6 stale+active+clean, 21 active+undersized+degraded, 94 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 97/627 objects degraded (15.470%) 2026-03-09T18:46:51.573 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: cluster 2026-03-09T18:46:50.182635+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e126: 8 total, 7 up, 8 in 2026-03-09T18:46:51.573 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: cluster 2026-03-09T18:46:50.182635+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e126: 8 total, 7 up, 8 in 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: cluster 2026-03-09T18:46:50.307504+0000 mon.a (mon.0) 386 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: cluster 2026-03-09T18:46:50.307504+0000 mon.a (mon.0) 386 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: audit 2026-03-09T18:46:50.488930+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: audit 2026-03-09T18:46:50.488930+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:46:51 vm00 bash[65531]: audit 2026-03-09T18:46:50.492276+0000 mon.a (mon.0) 387 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:51 vm00 bash[65531]: audit 2026-03-09T18:46:50.492276+0000 mon.a (mon.0) 387 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: cluster 2026-03-09T18:46:50.086904+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v118: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 34 active+undersized, 6 stale+active+clean, 21 active+undersized+degraded, 94 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 97/627 objects degraded (15.470%) 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: cluster 2026-03-09T18:46:50.086904+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v118: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 34 active+undersized, 6 stale+active+clean, 21 active+undersized+degraded, 94 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 97/627 objects degraded (15.470%) 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: cluster 2026-03-09T18:46:50.182635+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e126: 8 total, 7 up, 8 in 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: cluster 2026-03-09T18:46:50.182635+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e126: 8 total, 7 up, 8 in 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: cluster 2026-03-09T18:46:50.307504+0000 mon.a (mon.0) 386 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 
2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: cluster 2026-03-09T18:46:50.307504+0000 mon.a (mon.0) 386 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: audit 2026-03-09T18:46:50.488930+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: audit 2026-03-09T18:46:50.488930+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: audit 2026-03-09T18:46:50.492276+0000 mon.a (mon.0) 387 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:51.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:51 vm00 bash[69512]: audit 2026-03-09T18:46:50.492276+0000 mon.a (mon.0) 387 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T18:46:52.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: audit 2026-03-09T18:46:51.313210+0000 mon.a (mon.0) 388 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: audit 2026-03-09T18:46:51.313210+0000 mon.a (mon.0) 388 : audit [INF] from='osd.4 ' 
entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: audit 2026-03-09T18:46:51.314156+0000 mon.b (mon.2) 17 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: audit 2026-03-09T18:46:51.314156+0000 mon.b (mon.2) 17 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: cluster 2026-03-09T18:46:51.318802+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: cluster 2026-03-09T18:46:51.318802+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: audit 2026-03-09T18:46:51.324177+0000 mon.a (mon.0) 390 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:52 vm00 bash[65531]: audit 2026-03-09T18:46:51.324177+0000 mon.a (mon.0) 390 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 
bash[69512]: audit 2026-03-09T18:46:51.313210+0000 mon.a (mon.0) 388 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: audit 2026-03-09T18:46:51.313210+0000 mon.a (mon.0) 388 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: audit 2026-03-09T18:46:51.314156+0000 mon.b (mon.2) 17 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: audit 2026-03-09T18:46:51.314156+0000 mon.b (mon.2) 17 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: cluster 2026-03-09T18:46:51.318802+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: cluster 2026-03-09T18:46:51.318802+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in 2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: audit 2026-03-09T18:46:51.324177+0000 mon.a (mon.0) 390 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 
2026-03-09T18:46:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:52 vm00 bash[69512]: audit 2026-03-09T18:46:51.324177+0000 mon.a (mon.0) 390 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: audit 2026-03-09T18:46:51.313210+0000 mon.a (mon.0) 388 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: audit 2026-03-09T18:46:51.313210+0000 mon.a (mon.0) 388 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: audit 2026-03-09T18:46:51.314156+0000 mon.b (mon.2) 17 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: audit 2026-03-09T18:46:51.314156+0000 mon.b (mon.2) 17 : audit [INF] from='osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: cluster 2026-03-09T18:46:51.318802+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: cluster 2026-03-09T18:46:51.318802+0000 
mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: audit 2026-03-09T18:46:51.324177+0000 mon.a (mon.0) 390 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:52.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:52 vm08 bash[46122]: audit 2026-03-09T18:46:51.324177+0000 mon.a (mon.0) 390 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:46:53.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:51.577077+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:51.577077+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.087295+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v121: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 42 active+undersized, 23 active+undersized+degraded, 90 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.087295+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v121: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 42 active+undersized, 23 active+undersized+degraded, 90 active+clean; 457 KiB 
data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.314629+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.314629+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.354169+0000 mon.a (mon.0) 392 : cluster [INF] osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358] boot 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.354169+0000 mon.a (mon.0) 392 : cluster [INF] osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358] boot 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.354247+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: cluster 2026-03-09T18:46:52.354247+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:52.359013+0000 mon.c (mon.1) 275 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:52.359013+0000 mon.c (mon.1) 275 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 
2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:52.830914+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:52.830914+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:52.838254+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:53 vm00 bash[65531]: audit 2026-03-09T18:46:52.838254+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:51.577077+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:51.577077+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.087295+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v121: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 42 active+undersized, 23 active+undersized+degraded, 90 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.087295+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v121: 161 pgs: 2 remapped+peering, 
1 unknown, 3 peering, 42 active+undersized, 23 active+undersized+degraded, 90 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.314629+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.314629+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.354169+0000 mon.a (mon.0) 392 : cluster [INF] osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358] boot 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.354169+0000 mon.a (mon.0) 392 : cluster [INF] osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358] boot 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.354247+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: cluster 2026-03-09T18:46:52.354247+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:52.359013+0000 mon.c (mon.1) 275 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:52.359013+0000 mon.c (mon.1) 275 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:52.830914+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:52.830914+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:52.838254+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:53 vm00 bash[69512]: audit 2026-03-09T18:46:52.838254+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:51.577077+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:51.577077+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.087295+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v121: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 42 active+undersized, 23 active+undersized+degraded, 90 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%) 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: 
cluster 2026-03-09T18:46:52.087295+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v121: 161 pgs: 2 remapped+peering, 1 unknown, 3 peering, 42 active+undersized, 23 active+undersized+degraded, 90 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%) 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.314629+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.314629+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.354169+0000 mon.a (mon.0) 392 : cluster [INF] osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358] boot 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.354169+0000 mon.a (mon.0) 392 : cluster [INF] osd.4 [v2:192.168.123.108:6800/1033056358,v1:192.168.123.108:6801/1033056358] boot 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.354247+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: cluster 2026-03-09T18:46:52.354247+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:52.359013+0000 mon.c (mon.1) 275 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:46:53.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:52.359013+0000 mon.c (mon.1) 275 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:52.830914+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:52.830914+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:52.838254+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:53 vm08 bash[46122]: audit 2026-03-09T18:46:52.838254+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: cluster 2026-03-09T18:46:53.348211+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:46:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: cluster 2026-03-09T18:46:53.348211+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:46:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: audit 2026-03-09T18:46:53.426288+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: audit 2026-03-09T18:46:53.426288+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: audit 2026-03-09T18:46:53.432492+0000 
mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: audit 2026-03-09T18:46:53.432492+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: cluster 2026-03-09T18:46:54.171177+0000 mon.a (mon.0) 399 : cluster [WRN] Health check update: Degraded data redundancy: 102/627 objects degraded (16.268%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: cluster 2026-03-09T18:46:54.171177+0000 mon.a (mon.0) 399 : cluster [WRN] Health check update: Degraded data redundancy: 102/627 objects degraded (16.268%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: cluster 2026-03-09T18:46:54.349667+0000 mon.a (mon.0) 400 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:54 vm00 bash[65531]: cluster 2026-03-09T18:46:54.349667+0000 mon.a (mon.0) 400 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: cluster 2026-03-09T18:46:53.348211+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: cluster 2026-03-09T18:46:53.348211+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: audit 2026-03-09T18:46:53.426288+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: audit 2026-03-09T18:46:53.426288+0000 mon.a (mon.0) 
397 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: audit 2026-03-09T18:46:53.432492+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: audit 2026-03-09T18:46:53.432492+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: cluster 2026-03-09T18:46:54.171177+0000 mon.a (mon.0) 399 : cluster [WRN] Health check update: Degraded data redundancy: 102/627 objects degraded (16.268%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: cluster 2026-03-09T18:46:54.171177+0000 mon.a (mon.0) 399 : cluster [WRN] Health check update: Degraded data redundancy: 102/627 objects degraded (16.268%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: cluster 2026-03-09T18:46:54.349667+0000 mon.a (mon.0) 400 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:46:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:54 vm00 bash[69512]: cluster 2026-03-09T18:46:54.349667+0000 mon.a (mon.0) 400 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: cluster 2026-03-09T18:46:53.348211+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: cluster 2026-03-09T18:46:53.348211+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: audit 2026-03-09T18:46:53.426288+0000 mon.a (mon.0) 397 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: audit 2026-03-09T18:46:53.426288+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: audit 2026-03-09T18:46:53.432492+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: audit 2026-03-09T18:46:53.432492+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: cluster 2026-03-09T18:46:54.171177+0000 mon.a (mon.0) 399 : cluster [WRN] Health check update: Degraded data redundancy: 102/627 objects degraded (16.268%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: cluster 2026-03-09T18:46:54.171177+0000 mon.a (mon.0) 399 : cluster [WRN] Health check update: Degraded data redundancy: 102/627 objects degraded (16.268%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: cluster 2026-03-09T18:46:54.349667+0000 mon.a (mon.0) 400 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:46:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:54 vm08 bash[46122]: cluster 2026-03-09T18:46:54.349667+0000 mon.a (mon.0) 400 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T18:46:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:55 vm08 bash[46122]: cluster 2026-03-09T18:46:54.087735+0000 mgr.y (mgr.44107) 254 : cluster [DBG] pgmap v124: 161 pgs: 3 remapped, 2 remapped+peering, 1 unknown, 6 peering, 39 active+undersized, 23 active+undersized+degraded, 87 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 
objects degraded (16.268%); 0 B/s, 0 objects/s recovering 2026-03-09T18:46:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:55 vm08 bash[46122]: cluster 2026-03-09T18:46:54.087735+0000 mgr.y (mgr.44107) 254 : cluster [DBG] pgmap v124: 161 pgs: 3 remapped, 2 remapped+peering, 1 unknown, 6 peering, 39 active+undersized, 23 active+undersized+degraded, 87 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%); 0 B/s, 0 objects/s recovering 2026-03-09T18:46:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:55 vm00 bash[65531]: cluster 2026-03-09T18:46:54.087735+0000 mgr.y (mgr.44107) 254 : cluster [DBG] pgmap v124: 161 pgs: 3 remapped, 2 remapped+peering, 1 unknown, 6 peering, 39 active+undersized, 23 active+undersized+degraded, 87 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%); 0 B/s, 0 objects/s recovering 2026-03-09T18:46:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:55 vm00 bash[65531]: cluster 2026-03-09T18:46:54.087735+0000 mgr.y (mgr.44107) 254 : cluster [DBG] pgmap v124: 161 pgs: 3 remapped, 2 remapped+peering, 1 unknown, 6 peering, 39 active+undersized, 23 active+undersized+degraded, 87 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%); 0 B/s, 0 objects/s recovering 2026-03-09T18:46:55.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:55 vm00 bash[69512]: cluster 2026-03-09T18:46:54.087735+0000 mgr.y (mgr.44107) 254 : cluster [DBG] pgmap v124: 161 pgs: 3 remapped, 2 remapped+peering, 1 unknown, 6 peering, 39 active+undersized, 23 active+undersized+degraded, 87 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%); 0 B/s, 0 objects/s recovering 2026-03-09T18:46:55.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:55 vm00 bash[69512]: cluster 2026-03-09T18:46:54.087735+0000 mgr.y (mgr.44107) 254 : cluster [DBG] pgmap 
v124: 161 pgs: 3 remapped, 2 remapped+peering, 1 unknown, 6 peering, 39 active+undersized, 23 active+undersized+degraded, 87 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 102/627 objects degraded (16.268%); 0 B/s, 0 objects/s recovering 2026-03-09T18:46:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:56 vm08 bash[46122]: cluster 2026-03-09T18:46:56.088156+0000 mgr.y (mgr.44107) 255 : cluster [DBG] pgmap v126: 161 pgs: 3 remapped, 1 remapped+peering, 1 unknown, 6 peering, 13 active+undersized, 9 active+undersized+degraded, 128 active+clean; 457 KiB data, 204 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 28/627 objects degraded (4.466%); 23 B/s, 0 objects/s recovering 2026-03-09T18:46:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:56 vm08 bash[46122]: cluster 2026-03-09T18:46:56.088156+0000 mgr.y (mgr.44107) 255 : cluster [DBG] pgmap v126: 161 pgs: 3 remapped, 1 remapped+peering, 1 unknown, 6 peering, 13 active+undersized, 9 active+undersized+degraded, 128 active+clean; 457 KiB data, 204 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 28/627 objects degraded (4.466%); 23 B/s, 0 objects/s recovering 2026-03-09T18:46:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:56 vm00 bash[65531]: cluster 2026-03-09T18:46:56.088156+0000 mgr.y (mgr.44107) 255 : cluster [DBG] pgmap v126: 161 pgs: 3 remapped, 1 remapped+peering, 1 unknown, 6 peering, 13 active+undersized, 9 active+undersized+degraded, 128 active+clean; 457 KiB data, 204 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 28/627 objects degraded (4.466%); 23 B/s, 0 objects/s recovering 2026-03-09T18:46:56.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:56 vm00 bash[65531]: cluster 2026-03-09T18:46:56.088156+0000 mgr.y (mgr.44107) 255 : cluster [DBG] pgmap v126: 161 pgs: 3 remapped, 1 remapped+peering, 1 unknown, 6 peering, 13 active+undersized, 9 active+undersized+degraded, 128 active+clean; 457 KiB data, 204 MiB used, 160 GiB / 160 
GiB avail; 1.0 KiB/s rd, 1 op/s; 28/627 objects degraded (4.466%); 23 B/s, 0 objects/s recovering 2026-03-09T18:46:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:56 vm00 bash[69512]: cluster 2026-03-09T18:46:56.088156+0000 mgr.y (mgr.44107) 255 : cluster [DBG] pgmap v126: 161 pgs: 3 remapped, 1 remapped+peering, 1 unknown, 6 peering, 13 active+undersized, 9 active+undersized+degraded, 128 active+clean; 457 KiB data, 204 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 28/627 objects degraded (4.466%); 23 B/s, 0 objects/s recovering 2026-03-09T18:46:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:56 vm00 bash[69512]: cluster 2026-03-09T18:46:56.088156+0000 mgr.y (mgr.44107) 255 : cluster [DBG] pgmap v126: 161 pgs: 3 remapped, 1 remapped+peering, 1 unknown, 6 peering, 13 active+undersized, 9 active+undersized+degraded, 128 active+clean; 457 KiB data, 204 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 28/627 objects degraded (4.466%); 23 B/s, 0 objects/s recovering 2026-03-09T18:46:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:57 vm08 bash[46122]: cluster 2026-03-09T18:46:56.434759+0000 mon.a (mon.0) 401 : cluster [WRN] Health check update: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:57 vm08 bash[46122]: cluster 2026-03-09T18:46:56.434759+0000 mon.a (mon.0) 401 : cluster [WRN] Health check update: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:57 vm00 bash[65531]: cluster 2026-03-09T18:46:56.434759+0000 mon.a (mon.0) 401 : cluster [WRN] Health check update: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:57 vm00 bash[65531]: cluster 2026-03-09T18:46:56.434759+0000 mon.a (mon.0) 401 : cluster [WRN] Health check 
update: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:57 vm00 bash[69512]: cluster 2026-03-09T18:46:56.434759+0000 mon.a (mon.0) 401 : cluster [WRN] Health check update: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:57 vm00 bash[69512]: cluster 2026-03-09T18:46:56.434759+0000 mon.a (mon.0) 401 : cluster [WRN] Health check update: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY) 2026-03-09T18:46:57.894 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (17m) 22s ago 24m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (4m) 5s ago 23m 66.3M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (5m) 22s ago 23m 44.2M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (5m) 5s ago 26m 465M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (14m) 22s ago 27m 530M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (3m) 22s ago 27m 49.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (4m) 5s ago 27m 45.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:46:58.281 
INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (3m) 22s ago 27m 46.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (17m) 22s ago 24m 8028k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (17m) 5s ago 24m 7963k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (66s) 22s ago 26m 45.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (27s) 22s ago 26m 22.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (118s) 22s ago 26m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (2m) 22s ago 25m 69.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (10s) 5s ago 25m 12.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (25m) 5s ago 25m 57.8M 4096M 17.2.0 e1d6a67b021e 28b71e1b7c1b 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (25m) 5s ago 25m 55.2M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (24m) 5s ago 24m 56.3M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (5m) 5s ago 24m 43.4M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (23m) 22s ago 23m 89.4M - 
17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:46:58.281 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (23m) 5s ago 23m 90.2M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3, 2026-03-09T18:46:58.515 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 5, 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb 
(e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 10 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:46:58.516 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) crash,osd", 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "5/8 daemons upgraded", 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:46:58.728 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:46:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:58 vm00 bash[65531]: audit 2026-03-09T18:46:57.886997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.44386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:58 vm00 bash[65531]: audit 2026-03-09T18:46:57.886997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.44386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:46:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:58 vm00 bash[65531]: audit 2026-03-09T18:46:58.083014+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='client.44392 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:46:58.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:58 vm00 bash[65531]: audit 2026-03-09T18:46:58.083014+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='client.44392 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:58 vm00 bash[65531]: cluster 2026-03-09T18:46:58.088543+0000 mgr.y (mgr.44107) 258 : cluster [DBG] pgmap v127: 161 pgs: 1 peering, 160 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
2026-03-09T18:46:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:58 vm00 bash[65531]: audit 2026-03-09T18:46:58.280658+0000 mgr.y (mgr.44107) 259 : audit [DBG] from='client.44398 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:58 vm00 bash[69512]: audit 2026-03-09T18:46:57.886997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.44386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:58 vm00 bash[69512]: audit 2026-03-09T18:46:58.083014+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='client.44392 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:58 vm00 bash[69512]: cluster 2026-03-09T18:46:58.088543+0000 mgr.y (mgr.44107) 258 : cluster [DBG] pgmap v127: 161 pgs: 1 peering, 160 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
2026-03-09T18:46:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:58 vm00 bash[69512]: audit 2026-03-09T18:46:58.280658+0000 mgr.y (mgr.44107) 259 : audit [DBG] from='client.44398 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:58 vm08 bash[46122]: audit 2026-03-09T18:46:57.886997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.44386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:58 vm08 bash[46122]: audit 2026-03-09T18:46:58.083014+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='client.44392 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:58.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:58 vm08 bash[46122]: cluster 2026-03-09T18:46:58.088543+0000 mgr.y (mgr.44107) 258 : cluster [DBG] pgmap v127: 161 pgs: 1 peering, 160 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
2026-03-09T18:46:58.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:58 vm08 bash[46122]: audit 2026-03-09T18:46:58.280658+0000 mgr.y (mgr.44107) 259 : audit [DBG] from='client.44398 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: cluster 2026-03-09T18:46:58.443176+0000 mon.a (mon.0) 402 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 2 pgs peering)
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: cluster 2026-03-09T18:46:58.443198+0000 mon.a (mon.0) 403 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 28/627 objects degraded (4.466%), 9 pgs degraded)
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: cluster 2026-03-09T18:46:58.443203+0000 mon.a (mon.0) 404 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:58.518777+0000 mon.c (mon.1) 276 : audit [DBG] from='client.? 192.168.123.100:0/828597763' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:58.731939+0000 mgr.y (mgr.44107) 260 : audit [DBG] from='client.44410 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.246560+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.252295+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.255925+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.256698+0000 mon.c (mon.1) 278 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.260975+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.301670+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.303074+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.304079+0000 mon.c (mon.1) 281 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.304898+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.305903+0000 mon.c (mon.1) 283 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: audit 2026-03-09T18:46:59.306044+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:46:59 vm00 bash[69512]: cephadm 2026-03-09T18:46:59.307303+0000 mgr.y (mgr.44107) 262 : cephadm [INF] Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline)
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:59 vm00 bash[53976]: debug 2026-03-09T18:46:59.304+0000 7f991b282640 -1 mgr.server reply reply (16) Device or resource busy unsafe to stop osd(s) at this time (1 PGs are or would become offline)
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:46:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:46:59] "GET /metrics HTTP/1.1" 200 37843 "" "Prometheus/2.51.0"
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: cluster 2026-03-09T18:46:58.443176+0000 mon.a (mon.0) 402 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 2 pgs peering)
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: cluster 2026-03-09T18:46:58.443198+0000 mon.a (mon.0) 403 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 28/627 objects degraded (4.466%), 9 pgs degraded)
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: cluster 2026-03-09T18:46:58.443203+0000 mon.a (mon.0) 404 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:58.518777+0000 mon.c (mon.1) 276 : audit [DBG] from='client.? 192.168.123.100:0/828597763' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:58.731939+0000 mgr.y (mgr.44107) 260 : audit [DBG] from='client.44410 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.246560+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.252295+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.255925+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.256698+0000 mon.c (mon.1) 278 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.260975+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.301670+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.303074+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.304079+0000 mon.c (mon.1) 281 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.304898+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.305903+0000 mon.c (mon.1) 283 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: audit 2026-03-09T18:46:59.306044+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:46:59.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:46:59 vm00 bash[65531]: cephadm 2026-03-09T18:46:59.307303+0000 mgr.y (mgr.44107) 262 : cephadm [INF] Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline)
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: cluster 2026-03-09T18:46:58.443176+0000 mon.a (mon.0) 402 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 2 pgs peering)
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: cluster 2026-03-09T18:46:58.443198+0000 mon.a (mon.0) 403 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 28/627 objects degraded (4.466%), 9 pgs degraded)
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: cluster 2026-03-09T18:46:58.443203+0000 mon.a (mon.0) 404 : cluster [INF] Cluster is now healthy
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:58.518777+0000 mon.c (mon.1) 276 : audit [DBG] from='client.? 192.168.123.100:0/828597763' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:58.731939+0000 mgr.y (mgr.44107) 260 : audit [DBG] from='client.44410 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.246560+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.252295+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.255925+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:46:59.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.256698+0000 mon.c (mon.1) 278 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.260975+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.301670+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.303074+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.304079+0000 mon.c (mon.1) 281 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.304898+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.305903+0000 mon.c (mon.1) 283 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: audit 2026-03-09T18:46:59.306044+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:46:59.975 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:46:59 vm08 bash[46122]: cephadm 2026-03-09T18:46:59.307303+0000 mgr.y (mgr.44107) 262 : cephadm [INF] Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline)
2026-03-09T18:47:00.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:00 vm00 bash[65531]: cluster 2026-03-09T18:47:00.088996+0000 mgr.y (mgr.44107) 263 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 660 B/s rd, 0 op/s; 14 B/s, 1 objects/s recovering
2026-03-09T18:47:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:00 vm00 bash[69512]: cluster 2026-03-09T18:47:00.088996+0000 mgr.y (mgr.44107) 263 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 660 B/s rd, 0 op/s; 14 B/s, 1 objects/s recovering
2026-03-09T18:47:00.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:00 vm08 bash[46122]: cluster 2026-03-09T18:47:00.088996+0000 mgr.y (mgr.44107) 263 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 660 B/s rd, 0 op/s; 14 B/s, 1 objects/s recovering
2026-03-09T18:47:03.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:03 vm08 bash[46122]: audit 2026-03-09T18:47:01.578340+0000 mgr.y (mgr.44107) 264 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:03.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:03 vm08 bash[46122]: cluster 2026-03-09T18:47:02.089414+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 12 B/s, 1 objects/s recovering
2026-03-09T18:47:03.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:03 vm08 bash[46122]: audit 2026-03-09T18:47:03.115453+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:03.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:03 vm08 bash[46122]: audit 2026-03-09T18:47:03.116856+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:03 vm00 bash[65531]: audit 2026-03-09T18:47:01.578340+0000 mgr.y (mgr.44107) 264 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:03 vm00 bash[65531]: cluster 2026-03-09T18:47:02.089414+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 12 B/s, 1 objects/s recovering
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:03 vm00 bash[65531]: audit 2026-03-09T18:47:03.115453+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:03 vm00 bash[65531]: audit 2026-03-09T18:47:03.116856+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: audit 2026-03-09T18:47:01.578340+0000 mgr.y (mgr.44107) 264 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: cluster 2026-03-09T18:47:02.089414+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 12 B/s, 1 objects/s recovering
2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: cluster
2026-03-09T18:47:02.089414+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 12 B/s, 1 objects/s recovering 2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: audit 2026-03-09T18:47:03.115453+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: audit 2026-03-09T18:47:03.115453+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: audit 2026-03-09T18:47:03.116856+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:03.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:03 vm00 bash[69512]: audit 2026-03-09T18:47:03.116856+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:05 vm00 bash[65531]: cluster 2026-03-09T18:47:04.089734+0000 mgr.y (mgr.44107) 266 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering 2026-03-09T18:47:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:05 vm00 bash[65531]: cluster 2026-03-09T18:47:04.089734+0000 mgr.y (mgr.44107) 266 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering 2026-03-09T18:47:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:05 vm00 bash[69512]: cluster 2026-03-09T18:47:04.089734+0000 mgr.y (mgr.44107) 266 : 
cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering 2026-03-09T18:47:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:05 vm00 bash[69512]: cluster 2026-03-09T18:47:04.089734+0000 mgr.y (mgr.44107) 266 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering 2026-03-09T18:47:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:05 vm08 bash[46122]: cluster 2026-03-09T18:47:04.089734+0000 mgr.y (mgr.44107) 266 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering 2026-03-09T18:47:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:05 vm08 bash[46122]: cluster 2026-03-09T18:47:04.089734+0000 mgr.y (mgr.44107) 266 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering 2026-03-09T18:47:07.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:07 vm00 bash[65531]: cluster 2026-03-09T18:47:06.090227+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 9 B/s, 0 objects/s recovering 2026-03-09T18:47:07.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:07 vm00 bash[65531]: cluster 2026-03-09T18:47:06.090227+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 9 B/s, 0 objects/s recovering 2026-03-09T18:47:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:07 vm00 bash[69512]: cluster 2026-03-09T18:47:06.090227+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 
GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 9 B/s, 0 objects/s recovering 2026-03-09T18:47:07.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:07 vm00 bash[69512]: cluster 2026-03-09T18:47:06.090227+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 9 B/s, 0 objects/s recovering 2026-03-09T18:47:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:07 vm08 bash[46122]: cluster 2026-03-09T18:47:06.090227+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 9 B/s, 0 objects/s recovering 2026-03-09T18:47:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:07 vm08 bash[46122]: cluster 2026-03-09T18:47:06.090227+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 9 B/s, 0 objects/s recovering 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:09 vm00 bash[65531]: cluster 2026-03-09T18:47:08.090642+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:09 vm00 bash[65531]: cluster 2026-03-09T18:47:08.090642+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:09 vm00 bash[65531]: audit 2026-03-09T18:47:08.279688+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:09 vm00 bash[65531]: audit 2026-03-09T18:47:08.279688+0000 
mon.a (mon.0) 409 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:09 vm00 bash[69512]: cluster 2026-03-09T18:47:08.090642+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:09 vm00 bash[69512]: cluster 2026-03-09T18:47:08.090642+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:09 vm00 bash[69512]: audit 2026-03-09T18:47:08.279688+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:09 vm00 bash[69512]: audit 2026-03-09T18:47:08.279688+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:09.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:47:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:47:09] "GET /metrics HTTP/1.1" 200 37862 "" "Prometheus/2.51.0" 2026-03-09T18:47:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:09 vm08 bash[46122]: cluster 2026-03-09T18:47:08.090642+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:09 vm08 bash[46122]: cluster 2026-03-09T18:47:08.090642+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:09.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:09 vm08 bash[46122]: audit 2026-03-09T18:47:08.279688+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:09 vm08 bash[46122]: audit 2026-03-09T18:47:08.279688+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:11.575 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:11 vm00 bash[65531]: cluster 2026-03-09T18:47:10.091057+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:11.575 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:11 vm00 bash[65531]: cluster 2026-03-09T18:47:10.091057+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:11.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:11 vm00 bash[69512]: cluster 2026-03-09T18:47:10.091057+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:11.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:11 vm00 bash[69512]: cluster 2026-03-09T18:47:10.091057+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:11 vm08 bash[46122]: cluster 2026-03-09T18:47:10.091057+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:11.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:11 vm08 bash[46122]: cluster 2026-03-09T18:47:10.091057+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:47:13.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:13 vm00 bash[65531]: audit 2026-03-09T18:47:11.578959+0000 mgr.y (mgr.44107) 270 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:13 vm00 bash[65531]: audit 2026-03-09T18:47:11.578959+0000 mgr.y (mgr.44107) 270 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:13 vm00 bash[65531]: cluster 2026-03-09T18:47:12.091428+0000 mgr.y (mgr.44107) 271 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:13 vm00 bash[65531]: cluster 2026-03-09T18:47:12.091428+0000 mgr.y (mgr.44107) 271 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:13 vm00 bash[69512]: audit 2026-03-09T18:47:11.578959+0000 mgr.y (mgr.44107) 270 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:13 vm00 bash[69512]: audit 2026-03-09T18:47:11.578959+0000 mgr.y (mgr.44107) 270 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:13 vm00 bash[69512]: cluster 2026-03-09T18:47:12.091428+0000 mgr.y (mgr.44107) 271 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:47:13.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:13 vm00 bash[69512]: cluster 2026-03-09T18:47:12.091428+0000 mgr.y (mgr.44107) 271 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:47:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:13 vm08 bash[46122]: audit 2026-03-09T18:47:11.578959+0000 mgr.y (mgr.44107) 270 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:13 vm08 bash[46122]: audit 2026-03-09T18:47:11.578959+0000 mgr.y (mgr.44107) 270 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:13 vm08 bash[46122]: cluster 2026-03-09T18:47:12.091428+0000 mgr.y (mgr.44107) 271 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:47:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:13 vm08 bash[46122]: cluster 2026-03-09T18:47:12.091428+0000 mgr.y (mgr.44107) 271 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:47:15.546 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:15.546 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:15.546 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: cluster 2026-03-09T18:47:14.091804+0000 mgr.y (mgr.44107) 272 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: audit 2026-03-09T18:47:14.317073+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: audit 2026-03-09T18:47:14.317231+0000 mgr.y (mgr.44107) 273 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: cephadm 2026-03-09T18:47:14.317931+0000 mgr.y (mgr.44107) 274 : cephadm [INF] Upgrade: osd.5 is safe to restart
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: audit 2026-03-09T18:47:14.718413+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: audit 2026-03-09T18:47:14.723306+0000 mon.c (mon.1) 286 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 bash[46122]: audit 2026-03-09T18:47:14.723782+0000 mon.c (mon.1) 287 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:15.546 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:15.547 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:15.547 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: Stopping Ceph osd.5 for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:47:15.547 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:15.547 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:15.547 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:15.547 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:47:15 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:15.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: cluster 2026-03-09T18:47:14.091804+0000 mgr.y (mgr.44107) 272 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: audit 2026-03-09T18:47:14.317073+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: audit 2026-03-09T18:47:14.317231+0000 mgr.y (mgr.44107) 273 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: cephadm 2026-03-09T18:47:14.317931+0000 mgr.y (mgr.44107) 274 : cephadm [INF] Upgrade: osd.5 is safe to restart
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: audit 2026-03-09T18:47:14.718413+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: audit 2026-03-09T18:47:14.723306+0000 mon.c (mon.1) 286 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:15 vm00 bash[65531]: audit 2026-03-09T18:47:14.723782+0000 mon.c (mon.1) 287 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: cluster 2026-03-09T18:47:14.091804+0000 mgr.y (mgr.44107) 272 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: audit 2026-03-09T18:47:14.317073+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: audit 2026-03-09T18:47:14.317231+0000 mgr.y (mgr.44107) 273 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: cephadm 2026-03-09T18:47:14.317931+0000 mgr.y (mgr.44107) 274 : cephadm [INF] Upgrade: osd.5 is safe to restart
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: audit 2026-03-09T18:47:14.718413+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: audit 2026-03-09T18:47:14.723306+0000 mon.c (mon.1) 286 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T18:47:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:15 vm00 bash[69512]: audit 2026-03-09T18:47:14.723782+0000 mon.c (mon.1) 287 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:15.974 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:15 vm08 bash[23954]: debug 2026-03-09T18:47:15.543+0000 7f0ad59d3700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T18:47:15.974 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:15 vm08 bash[23954]: debug 2026-03-09T18:47:15.543+0000 7f0ad59d3700 -1 osd.5 130 *** Got signal Terminated ***
2026-03-09T18:47:15.974 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:15 vm08 bash[23954]: debug 2026-03-09T18:47:15.543+0000 7f0ad59d3700 -1 osd.5 130 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T18:47:16.625 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:16 vm08 bash[58270]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-5
2026-03-09T18:47:16.625 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:16 vm08 bash[46122]: cephadm 2026-03-09T18:47:14.713641+0000 mgr.y (mgr.44107) 275 : cephadm [INF] Upgrade: Updating osd.5
2026-03-09T18:47:16.625 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:16 vm08 bash[46122]: cephadm 2026-03-09T18:47:14.725090+0000 mgr.y (mgr.44107) 276 : cephadm [INF] Deploying daemon osd.5 on vm08
2026-03-09T18:47:16.625 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:16 vm08 bash[46122]: cluster 2026-03-09T18:47:15.550674+0000 mon.a (mon.0) 411 : cluster [INF] osd.5 marked itself down and dead
2026-03-09T18:47:16.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:16 vm00 bash[65531]: cephadm 2026-03-09T18:47:14.713641+0000 mgr.y (mgr.44107) 275 : cephadm [INF] Upgrade: Updating osd.5
2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:16 vm00 bash[65531]: cephadm 2026-03-09T18:47:14.725090+0000 mgr.y (mgr.44107) 276 : cephadm [INF] Deploying daemon osd.5 on vm08
2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:16 vm00 bash[65531]: cluster 2026-03-09T18:47:15.550674+0000 mon.a (mon.0) 411 : cluster [INF] osd.5 marked itself down and dead
2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:16 vm00 bash[69512]: cephadm 2026-03-09T18:47:14.713641+0000 mgr.y (mgr.44107) 275 : cephadm [INF] Upgrade: Updating osd.5
2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09
18:47:16 vm00 bash[69512]: cephadm 2026-03-09T18:47:14.713641+0000 mgr.y (mgr.44107) 275 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:16 vm00 bash[69512]: cephadm 2026-03-09T18:47:14.725090+0000 mgr.y (mgr.44107) 276 : cephadm [INF] Deploying daemon osd.5 on vm08 2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:16 vm00 bash[69512]: cephadm 2026-03-09T18:47:14.725090+0000 mgr.y (mgr.44107) 276 : cephadm [INF] Deploying daemon osd.5 on vm08 2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:16 vm00 bash[69512]: cluster 2026-03-09T18:47:15.550674+0000 mon.a (mon.0) 411 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T18:47:16.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:16 vm00 bash[69512]: cluster 2026-03-09T18:47:15.550674+0000 mon.a (mon.0) 411 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T18:47:16.917 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:16.917 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:47:16.917 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:16.917 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.5.service: Deactivated successfully.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: Stopped Ceph osd.5 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: Started Ceph osd.5 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:47:16.918 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:47:16 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:17.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:17.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:17.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:17 vm00 bash[65531]: cluster 2026-03-09T18:47:16.092284+0000 mgr.y (mgr.44107) 277 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:47:17.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:17 vm00 bash[65531]: cluster 2026-03-09T18:47:16.297735+0000 mon.a (mon.0) 412 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:17 vm00 bash[65531]: cluster 2026-03-09T18:47:16.333641+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e131: 8 total, 7 up, 8 in
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:17 vm00 bash[65531]: audit 2026-03-09T18:47:16.892532+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:17 vm00 bash[65531]: audit 2026-03-09T18:47:16.902907+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:17 vm00 bash[65531]: audit 2026-03-09T18:47:16.904156+0000 mon.c (mon.1) 288 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:17 vm00 bash[69512]: cluster 2026-03-09T18:47:16.092284+0000 mgr.y (mgr.44107) 277 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:17 vm00 bash[69512]: cluster 2026-03-09T18:47:16.297735+0000 mon.a (mon.0) 412 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:17 vm00 bash[69512]: cluster 2026-03-09T18:47:16.333641+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e131: 8 total, 7 up, 8 in
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:17 vm00 bash[69512]: audit 2026-03-09T18:47:16.892532+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:17 vm00 bash[69512]: audit 2026-03-09T18:47:16.902907+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:17 vm00 bash[69512]: audit 2026-03-09T18:47:16.904156+0000 mon.c (mon.1) 288 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:17 vm08 bash[46122]: cluster 2026-03-09T18:47:16.092284+0000 mgr.y (mgr.44107) 277 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:47:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:17 vm08 bash[46122]: cluster 2026-03-09T18:47:16.297735+0000 mon.a (mon.0) 412 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:47:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:17 vm08 bash[46122]: cluster 2026-03-09T18:47:16.333641+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e131: 8 total, 7 up, 8 in
2026-03-09T18:47:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:17 vm08 bash[46122]: audit 2026-03-09T18:47:16.892532+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:17 vm08 bash[46122]: audit 2026-03-09T18:47:16.902907+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:17.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:17 vm08 bash[46122]: audit 2026-03-09T18:47:16.904156+0000 mon.c (mon.1) 288 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:18.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-09T18:47:18.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:18.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:18.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-09T18:47:18.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:17 vm08 bash[58475]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-9d8c04f0-f16b-4075-94fe-a9a8b4ee6523/osd-block-c8fd35d5-49cd-4d8e-981a-afb708e47c9d --path /var/lib/ceph/osd/ceph-5 --no-mon-config
2026-03-09T18:47:18.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:18 vm00 bash[69512]: cluster 2026-03-09T18:47:17.326257+0000 mon.a (mon.0) 416 : cluster [DBG] osdmap e132: 8 total, 7 up, 8 in
2026-03-09T18:47:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:18 vm00 bash[69512]: audit 2026-03-09T18:47:18.104428+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:18 vm00 bash[69512]: audit 2026-03-09T18:47:18.107620+0000 mon.c (mon.1) 289 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:47:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:18 vm00 bash[65531]: cluster 2026-03-09T18:47:17.326257+0000 mon.a (mon.0) 416 : cluster [DBG] osdmap e132: 8 total, 7 up, 8 in
2026-03-09T18:47:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:18 vm00 bash[65531]: audit 2026-03-09T18:47:18.104428+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:18 vm00 bash[65531]: audit 2026-03-09T18:47:18.107620+0000 mon.c (mon.1) 289 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:47:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:18 vm08 bash[46122]: cluster 2026-03-09T18:47:17.326257+0000 mon.a (mon.0) 416 : cluster [DBG] osdmap e132: 8 total, 7 up, 8 in
2026-03-09T18:47:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:18 vm08 bash[46122]: audit 2026-03-09T18:47:18.104428+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:18.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:18 vm08 bash[46122]: audit 2026-03-09T18:47:18.107620+0000 mon.c (mon.1) 289 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:47:18.724 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:18 vm08 bash[58475]: Running command: /usr/bin/ln -snf /dev/ceph-9d8c04f0-f16b-4075-94fe-a9a8b4ee6523/osd-block-c8fd35d5-49cd-4d8e-981a-afb708e47c9d /var/lib/ceph/osd/ceph-5/block
2026-03-09T18:47:18.724 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:18 vm08 bash[58475]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
2026-03-09T18:47:18.724 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:18 vm08 bash[58475]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
2026-03-09T18:47:18.724 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:18 vm08 bash[58475]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-09T18:47:18.724 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:18 vm08 bash[58475]: --> ceph-volume lvm activate successful for osd ID: 5
2026-03-09T18:47:18.724 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:18 vm08 bash[58822]: debug 2026-03-09T18:47:18.407+0000 7f9060d3d640 1 -- 192.168.123.108:0/3827525286 <== mon.1 v2:192.168.123.100:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x562c3e29e000 con 0x562c3e243c00
2026-03-09T18:47:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:19 vm08 bash[46122]: cluster 2026-03-09T18:47:18.092547+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v139: 161 pgs: 6 active+undersized, 11 peering, 17 stale+active+clean, 127 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:47:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:19 vm08 bash[46122]: cluster 2026-03-09T18:47:18.332709+0000 mon.a (mon.0) 418 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-09T18:47:19.474 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:19 vm08 bash[58822]: debug 2026-03-09T18:47:19.099+0000 7f90635a7740 -1 Falling back to public interface
2026-03-09T18:47:19.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:19 vm00 bash[65531]: cluster 2026-03-09T18:47:18.092547+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v139: 161 pgs: 6 active+undersized, 11 peering, 17 stale+active+clean, 127 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:47:19.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:19 vm00 bash[65531]: cluster 2026-03-09T18:47:18.332709+0000 mon.a (mon.0) 418 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-09T18:47:19.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:47:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:47:19] "GET /metrics HTTP/1.1" 200 37847 "" "Prometheus/2.51.0"
2026-03-09T18:47:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:19 vm00 bash[69512]: cluster 2026-03-09T18:47:18.092547+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v139: 161 pgs: 6 active+undersized, 11 peering, 17 stale+active+clean, 127 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:47:19.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:19 vm00 bash[69512]: cluster 2026-03-09T18:47:18.332709+0000 mon.a (mon.0) 418 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-09T18:47:20.343 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:20 vm08 bash[58822]: debug 2026-03-09T18:47:20.063+0000 7f90635a7740 -1 osd.5 0 read_superblock omap replica is missing.
2026-03-09T18:47:20.343 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:20 vm08 bash[58822]: debug 2026-03-09T18:47:20.099+0000 7f90635a7740 -1 osd.5 131 log_to_monitors true
2026-03-09T18:47:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:20 vm00 bash[65531]: audit 2026-03-09T18:47:20.107207+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T18:47:20.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:20 vm00 bash[65531]: audit 2026-03-09T18:47:20.110370+0000 mon.a (mon.0) 419 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T18:47:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:20 vm00 bash[69512]: audit 2026-03-09T18:47:20.107207+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T18:47:20.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:20 vm00 bash[69512]: audit 2026-03-09T18:47:20.110370+0000 mon.a (mon.0) 419 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T18:47:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:20 vm08 bash[46122]: audit 2026-03-09T18:47:20.107207+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T18:47:20.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:20 vm08 bash[46122]: audit 2026-03-09T18:47:20.110370+0000 mon.a (mon.0) 419 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T18:47:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:21 vm08 bash[46122]: cluster 2026-03-09T18:47:20.093008+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v140: 161 pgs: 29 active+undersized, 11 peering, 5 stale+active+clean, 11 active+undersized+degraded, 105 active+clean; 457 KiB data, 209 MiB used, 160 GiB / 160 GiB avail; 37/627 objects degraded (5.901%)
2026-03-09T18:47:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:21 vm08 bash[46122]: cluster 2026-03-09T18:47:20.344488+0000 mon.a (mon.0) 420 : cluster [WRN] Health check failed: Degraded data redundancy: 37/627 objects degraded (5.901%), 11 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:21 vm08 bash[46122]: audit 2026-03-09T18:47:20.355406+0000 mon.a (mon.0) 421 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T18:47:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:21 vm08 bash[46122]: audit 2026-03-09T18:47:20.359376+0000 mon.b (mon.2) 19 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:21 vm08 bash[46122]: cluster 2026-03-09T18:47:20.360218+0000 mon.a (mon.0) 422 : cluster [DBG] osdmap e133: 8 total, 7 up, 8 in
2026-03-09T18:47:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:21 vm08 bash[46122]: audit 2026-03-09T18:47:20.364888+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:21.474 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:21 vm08 bash[58822]: debug 2026-03-09T18:47:21.215+0000 7f905ab51640 -1 osd.5 131 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T18:47:21.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:21 vm00 bash[65531]: cluster 2026-03-09T18:47:20.093008+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v140: 161 pgs: 29 active+undersized, 11 peering, 5 stale+active+clean, 11 active+undersized+degraded, 105 active+clean; 457 KiB data, 209 MiB used, 160 GiB / 160 GiB avail; 37/627 objects degraded (5.901%)
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:21 vm00 bash[65531]: cluster 2026-03-09T18:47:20.344488+0000 mon.a (mon.0) 420 : cluster [WRN] Health check failed: Degraded data redundancy: 37/627 objects degraded (5.901%), 11 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:21 vm00 bash[65531]: audit 2026-03-09T18:47:20.355406+0000 mon.a (mon.0) 421 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:21 vm00 bash[65531]: audit 2026-03-09T18:47:20.359376+0000 mon.b (mon.2) 19 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:21 vm00 bash[65531]: cluster 2026-03-09T18:47:20.360218+0000 mon.a (mon.0) 422 : cluster [DBG] osdmap e133: 8 total, 7 up, 8 in
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:21 vm00 bash[65531]: audit 2026-03-09T18:47:20.364888+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:21 vm00 bash[69512]: cluster 2026-03-09T18:47:20.093008+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v140: 161 pgs: 29 active+undersized, 11 peering, 5 stale+active+clean, 11 active+undersized+degraded, 105 active+clean; 457 KiB data, 209 MiB used, 160 GiB / 160 GiB avail; 37/627 objects degraded (5.901%)
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:21 vm00 bash[69512]: cluster 2026-03-09T18:47:20.344488+0000 mon.a (mon.0) 420 : cluster [WRN] Health check failed: Degraded data redundancy: 37/627 objects degraded (5.901%), 11 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:21 vm00 bash[69512]: audit 2026-03-09T18:47:20.355406+0000 mon.a (mon.0) 421 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:21 vm00 bash[69512]: audit 2026-03-09T18:47:20.359376+0000 mon.b (mon.2) 19 : audit [INF] from='osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:21 vm00 bash[69512]: cluster 2026-03-09T18:47:20.360218+0000 mon.a (mon.0) 422 : cluster [DBG] osdmap e133: 8 total, 7 up, 8 in
2026-03-09T18:47:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:21 vm00 bash[69512]: audit 2026-03-09T18:47:20.364888+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:22.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:22 vm00 bash[65531]: cluster 2026-03-09T18:47:21.359314+0000 mon.a (mon.0) 424 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:22 vm00 bash[65531]: cluster 2026-03-09T18:47:21.376626+0000 mon.a (mon.0) 425 : cluster [INF] osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585] boot
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:22 vm00 bash[65531]: cluster 2026-03-09T18:47:21.376745+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:22 vm00 bash[65531]: audit 2026-03-09T18:47:21.393998+0000 mon.c (mon.1) 290 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:22 vm00 bash[69512]: cluster 2026-03-09T18:47:21.359314+0000 mon.a (mon.0) 424 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:22 vm00 bash[69512]: cluster 2026-03-09T18:47:21.376626+0000 mon.a (mon.0) 425 : cluster [INF] osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585] boot
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:22 vm00 bash[69512]: cluster 2026-03-09T18:47:21.376745+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-09T18:47:22.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:22 vm00 bash[69512]: audit 2026-03-09T18:47:21.393998+0000 mon.c (mon.1) 290 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:47:22.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:22 vm08 bash[46122]: cluster 2026-03-09T18:47:21.359314+0000 mon.a (mon.0) 424 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:22.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:22 vm08 bash[46122]: cluster 2026-03-09T18:47:21.376626+0000 mon.a (mon.0) 425 : cluster [INF] osd.5 [v2:192.168.123.108:6808/3443676585,v1:192.168.123.108:6809/3443676585] boot
2026-03-09T18:47:22.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:22 vm08 bash[46122]: cluster 2026-03-09T18:47:21.376745+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-09T18:47:22.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:22 vm08 bash[46122]: audit 2026-03-09T18:47:21.393998+0000 mon.c (mon.1) 290 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T18:47:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:23 vm08 bash[46122]: cluster 2026-03-09T18:47:21.207794+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 37585.720814 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T18:47:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:23 vm08 bash[46122]: audit 2026-03-09T18:47:21.585618+0000 mgr.y (mgr.44107) 280 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:23 vm08 bash[46122]: cluster 2026-03-09T18:47:22.093384+0000 mgr.y (mgr.44107) 281 : cluster [DBG] pgmap v143: 161 pgs: 41 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 70/627 objects degraded (11.164%)
2026-03-09T18:47:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:23 vm08 bash[46122]: cluster 2026-03-09T18:47:22.374481+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-09T18:47:23.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:23 vm08 bash[46122]: cluster 2026-03-09T18:47:22.381637+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:23 vm00 bash[65531]: cluster 2026-03-09T18:47:21.207794+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 37585.720814 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:23 vm00 bash[65531]: audit 2026-03-09T18:47:21.585618+0000 mgr.y (mgr.44107) 280 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:23 vm00 bash[65531]: cluster 2026-03-09T18:47:22.093384+0000 mgr.y (mgr.44107) 281 : cluster [DBG] pgmap v143: 161 pgs: 41 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 70/627 objects degraded (11.164%)
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:23 vm00 bash[65531]: cluster 2026-03-09T18:47:22.374481+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:23 vm00 bash[65531]: cluster 2026-03-09T18:47:22.381637+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:23 vm00 bash[69512]: cluster 2026-03-09T18:47:21.207794+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 37585.720814 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:23 vm00 bash[69512]: audit 2026-03-09T18:47:21.585618+0000 mgr.y (mgr.44107) 280 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:23 vm00 bash[69512]: cluster 2026-03-09T18:47:22.093384+0000 mgr.y (mgr.44107) 281 : cluster [DBG] pgmap v143: 161 pgs: 41 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 70/627 objects degraded (11.164%)
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:23 vm00 bash[69512]: cluster 2026-03-09T18:47:22.374481+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-09T18:47:23.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:23 vm00 bash[69512]: cluster 2026-03-09T18:47:22.381637+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-09T18:47:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:24 vm08 bash[46122]: audit 2026-03-09T18:47:23.469039+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:24 vm08 bash[46122]: audit 2026-03-09T18:47:23.480580+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:24 vm08 bash[46122]: audit 2026-03-09T18:47:24.025076+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:24 vm08 bash[46122]: audit 2026-03-09T18:47:24.032731+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:24 vm08 bash[46122]: cluster 2026-03-09T18:47:24.093890+0000 mgr.y (mgr.44107) 282 : cluster [DBG] pgmap v145: 161 pgs: 36 active+undersized, 19 active+undersized+degraded, 106 active+clean; 457 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 70/627 objects degraded (11.164%)
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:24 vm00 bash[65531]: audit 2026-03-09T18:47:23.469039+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:24 vm00 bash[65531]: audit 2026-03-09T18:47:23.480580+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:24 vm00 bash[65531]: audit 2026-03-09T18:47:24.025076+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:24 vm00 bash[65531]: audit 2026-03-09T18:47:24.032731+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:24 vm00 bash[65531]: cluster 2026-03-09T18:47:24.093890+0000 mgr.y (mgr.44107) 282 : cluster [DBG] pgmap v145: 161 pgs: 36 active+undersized, 19 active+undersized+degraded, 106 active+clean; 457 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 70/627 objects degraded (11.164%)
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:24 vm00 bash[69512]: audit 2026-03-09T18:47:23.469039+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:24 vm00 bash[69512]: audit 2026-03-09T18:47:23.480580+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:24 vm00 bash[69512]: audit 2026-03-09T18:47:24.025076+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:24 vm00 bash[69512]: audit 2026-03-09T18:47:24.032731+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:24.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:24 vm00 bash[69512]: cluster 2026-03-09T18:47:24.093890+0000 mgr.y (mgr.44107) 282 : cluster [DBG] pgmap v145: 161 pgs: 36 active+undersized, 19 active+undersized+degraded, 106 active+clean; 457 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 70/627 objects degraded (11.164%)
2026-03-09T18:47:26.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:26 vm08 bash[46122]: cluster 2026-03-09T18:47:26.145001+0000 mon.a (mon.0) 433 : cluster [WRN] Health check update: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:26.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:26 vm00 bash[65531]: cluster 2026-03-09T18:47:26.145001+0000 mon.a (mon.0) 433 : cluster [WRN] Health check update: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:26 vm00 bash[65531]: cluster 2026-03-09T18:47:26.145001+0000 mon.a (mon.0) 433 : cluster [WRN] Health check update: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:26 vm00 bash[69512]: cluster 2026-03-09T18:47:26.145001+0000 mon.a (mon.0) 433 : cluster [WRN] Health check update: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:26 vm00 bash[69512]: cluster 2026-03-09T18:47:26.145001+0000 mon.a (mon.0) 433 : cluster [WRN] Health check update: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:27.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:27 vm08 bash[46122]: cluster 2026-03-09T18:47:26.094350+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v146: 161 pgs: 5 active+undersized, 2 active+undersized+degraded, 154 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 6/627 objects degraded (0.957%) 2026-03-09T18:47:27.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:27 vm08 bash[46122]: cluster 2026-03-09T18:47:26.094350+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v146: 161 pgs: 5 active+undersized, 2 active+undersized+degraded, 154 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 6/627 objects degraded (0.957%) 2026-03-09T18:47:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:27 vm00 bash[65531]: cluster 2026-03-09T18:47:26.094350+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v146: 161 pgs: 5 active+undersized, 2 active+undersized+degraded, 154 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 6/627 objects degraded (0.957%) 2026-03-09T18:47:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:27 vm00 bash[65531]: cluster 
2026-03-09T18:47:26.094350+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v146: 161 pgs: 5 active+undersized, 2 active+undersized+degraded, 154 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 6/627 objects degraded (0.957%) 2026-03-09T18:47:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:27 vm00 bash[69512]: cluster 2026-03-09T18:47:26.094350+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v146: 161 pgs: 5 active+undersized, 2 active+undersized+degraded, 154 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 6/627 objects degraded (0.957%) 2026-03-09T18:47:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:27 vm00 bash[69512]: cluster 2026-03-09T18:47:26.094350+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v146: 161 pgs: 5 active+undersized, 2 active+undersized+degraded, 154 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 6/627 objects degraded (0.957%) 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:28 vm00 bash[65531]: cluster 2026-03-09T18:47:28.200447+0000 mon.a (mon.0) 434 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded) 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:28 vm00 bash[65531]: cluster 2026-03-09T18:47:28.200447+0000 mon.a (mon.0) 434 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded) 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:28 vm00 bash[65531]: cluster 2026-03-09T18:47:28.200465+0000 mon.a (mon.0) 435 : cluster [INF] Cluster is now healthy 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:28 vm00 bash[65531]: cluster 2026-03-09T18:47:28.200465+0000 mon.a (mon.0) 435 : cluster [INF] Cluster is now healthy 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:28 vm00 
bash[69512]: cluster 2026-03-09T18:47:28.200447+0000 mon.a (mon.0) 434 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded) 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:28 vm00 bash[69512]: cluster 2026-03-09T18:47:28.200447+0000 mon.a (mon.0) 434 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded) 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:28 vm00 bash[69512]: cluster 2026-03-09T18:47:28.200465+0000 mon.a (mon.0) 435 : cluster [INF] Cluster is now healthy 2026-03-09T18:47:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:28 vm00 bash[69512]: cluster 2026-03-09T18:47:28.200465+0000 mon.a (mon.0) 435 : cluster [INF] Cluster is now healthy 2026-03-09T18:47:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:28 vm08 bash[46122]: cluster 2026-03-09T18:47:28.200447+0000 mon.a (mon.0) 434 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded) 2026-03-09T18:47:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:28 vm08 bash[46122]: cluster 2026-03-09T18:47:28.200447+0000 mon.a (mon.0) 434 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/627 objects degraded (0.957%), 2 pgs degraded) 2026-03-09T18:47:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:28 vm08 bash[46122]: cluster 2026-03-09T18:47:28.200465+0000 mon.a (mon.0) 435 : cluster [INF] Cluster is now healthy 2026-03-09T18:47:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:28 vm08 bash[46122]: cluster 2026-03-09T18:47:28.200465+0000 mon.a (mon.0) 435 : cluster [INF] Cluster is now healthy 2026-03-09T18:47:28.934 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:47:29.322 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS 
REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (17m) 53s ago 24m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (5m) 5s ago 24m 66.4M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (5m) 53s ago 24m 44.2M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (5m) 5s ago 27m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (14m) 53s ago 28m 530M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (3m) 53s ago 28m 49.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (4m) 5s ago 27m 46.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (3m) 53s ago 27m 46.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (17m) 53s ago 24m 8028k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (17m) 5s ago 24m 8047k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (97s) 53s ago 27m 45.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (58s) 53s ago 26m 22.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 
b0cddb861a9d 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (2m) 53s ago 26m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (2m) 53s ago 26m 69.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (41s) 5s ago 26m 49.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (10s) 5s ago 25m 33.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (25m) 5s ago 25m 56.0M 4096M 17.2.0 e1d6a67b021e 80e1a58dd2f5 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (25m) 5s ago 25m 57.5M 4096M 17.2.0 e1d6a67b021e 4f91765b51cf 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (5m) 5s ago 24m 43.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (24m) 53s ago 24m 89.4M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:47:29.323 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (24m) 5s ago 24m 90.4M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:47:29.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:29 vm00 bash[65531]: cluster 2026-03-09T18:47:28.094675+0000 mgr.y (mgr.44107) 284 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 529 B/s rd, 0 op/s 2026-03-09T18:47:29.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:29 vm00 bash[65531]: cluster 2026-03-09T18:47:28.094675+0000 mgr.y (mgr.44107) 284 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 529 B/s 
rd, 0 op/s 2026-03-09T18:47:29.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:29 vm00 bash[69512]: cluster 2026-03-09T18:47:28.094675+0000 mgr.y (mgr.44107) 284 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 529 B/s rd, 0 op/s 2026-03-09T18:47:29.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:29 vm00 bash[69512]: cluster 2026-03-09T18:47:28.094675+0000 mgr.y (mgr.44107) 284 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 529 B/s rd, 0 op/s 2026-03-09T18:47:29.555 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:47:29.555 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:47:29.555 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2, 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:47:29.556 
INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4, 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 11 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:47:29.556 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:47:29.565 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:29 vm08 bash[46122]: cluster 2026-03-09T18:47:28.094675+0000 mgr.y (mgr.44107) 284 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 529 B/s rd, 0 op/s 2026-03-09T18:47:29.565 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:29 vm08 bash[46122]: cluster 2026-03-09T18:47:28.094675+0000 mgr.y (mgr.44107) 284 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 529 B/s rd, 0 op/s 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons of type(s) crash,osd", 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "6/8 daemons upgraded", 2026-03-09T18:47:29.804 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T18:47:29.805 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": 
false 2026-03-09T18:47:29.805 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:47:29.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:47:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:47:29] "GET /metrics HTTP/1.1" 200 37847 "" "Prometheus/2.51.0" 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:28.928608+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:28.928608+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.128370+0000 mgr.y (mgr.44107) 286 : audit [DBG] from='client.54420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.128370+0000 mgr.y (mgr.44107) 286 : audit [DBG] from='client.54420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.322282+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.322282+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.556170+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/2233267551' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.556170+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/2233267551' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.619861+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.619861+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.626848+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.626848+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.629321+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.629321+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.630116+0000 mon.c (mon.1) 292 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.630116+0000 mon.c (mon.1) 292 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.635387+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.635387+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.678544+0000 mon.c (mon.1) 293 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.678544+0000 mon.c (mon.1) 293 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.680230+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.680230+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.681469+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.527 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.681469+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.682821+0000 mon.c (mon.1) 296 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.682821+0000 mon.c (mon.1) 296 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.684001+0000 mon.c (mon.1) 297 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:29.684001+0000 mon.c (mon.1) 297 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:30.105484+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: 
audit 2026-03-09T18:47:30.105484+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:30.108029+0000 mon.c (mon.1) 298 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:30.108029+0000 mon.c (mon.1) 298 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:30.108929+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.528 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 bash[46122]: audit 2026-03-09T18:47:30.108929+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:28.928608+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:28.928608+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.128370+0000 mgr.y (mgr.44107) 286 : audit 
[DBG] from='client.54420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.128370+0000 mgr.y (mgr.44107) 286 : audit [DBG] from='client.54420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.322282+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.322282+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.556170+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/2233267551' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.556170+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.100:0/2233267551' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.619861+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.619861+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.626848+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.626848+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.629321+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.629321+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.630116+0000 mon.c (mon.1) 292 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.630116+0000 mon.c (mon.1) 292 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.635387+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.635387+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.678544+0000 mon.c (mon.1) 293 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.678544+0000 mon.c (mon.1) 293 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.680230+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.680230+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.681469+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.681469+0000 mon.c (mon.1) 295 : 
audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.682821+0000 mon.c (mon.1) 296 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.682821+0000 mon.c (mon.1) 296 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.684001+0000 mon.c (mon.1) 297 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:29.684001+0000 mon.c (mon.1) 297 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:30.105484+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:30.105484+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:30.108029+0000 mon.c (mon.1) 298 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:30.108029+0000 mon.c (mon.1) 298 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:30.108929+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:30 vm00 bash[65531]: audit 2026-03-09T18:47:30.108929+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:28.928608+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:28.928608+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.128370+0000 mgr.y (mgr.44107) 286 : audit [DBG] from='client.54420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.128370+0000 mgr.y (mgr.44107) 286 : audit [DBG] from='client.54420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.630 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.322282+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.322282+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.556170+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/2233267551' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.556170+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.100:0/2233267551' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.619861+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.619861+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.626848+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.626848+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 
2026-03-09T18:47:29.629321+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.629321+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.630116+0000 mon.c (mon.1) 292 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.630116+0000 mon.c (mon.1) 292 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.635387+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.635387+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.678544+0000 mon.c (mon.1) 293 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.678544+0000 mon.c (mon.1) 293 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.680230+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.680230+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.681469+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.681469+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.682821+0000 mon.c (mon.1) 296 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.682821+0000 mon.c (mon.1) 296 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.684001+0000 mon.c (mon.1) 297 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:30.630 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:29.684001+0000 mon.c (mon.1) 297 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:30.105484+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:30.105484+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:30.108029+0000 mon.c (mon.1) 298 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:30.108029+0000 mon.c (mon.1) 298 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:30.108929+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:30.630 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:30 vm00 bash[69512]: audit 2026-03-09T18:47:30.108929+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit 
configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: Stopping Ceph osd.6 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:30 vm08 bash[27102]: debug 2026-03-09T18:47:30.927+0000 7f53c0c4d700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:30 vm08 bash[27102]: debug 2026-03-09T18:47:30.927+0000 7f53c0c4d700 -1 osd.6 135 *** Got signal Terminated *** 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:30 vm08 bash[27102]: debug 2026-03-09T18:47:30.927+0000 7f53c0c4d700 -1 osd.6 135 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:47:31.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.225 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.225 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.225 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:47:30 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: audit 2026-03-09T18:47:29.684293+0000 mgr.y (mgr.44107) 288 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: audit 2026-03-09T18:47:29.684293+0000 mgr.y (mgr.44107) 288 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cephadm 2026-03-09T18:47:29.685327+0000 mgr.y (mgr.44107) 289 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cephadm 2026-03-09T18:47:29.685327+0000 mgr.y (mgr.44107) 289 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: audit 2026-03-09T18:47:29.807823+0000 mgr.y (mgr.44107) 290 : audit [DBG] from='client.44440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: audit 2026-03-09T18:47:29.807823+0000 mgr.y (mgr.44107) 290 : audit [DBG] from='client.44440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cluster 2026-03-09T18:47:30.095101+0000 mgr.y (mgr.44107) 291 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 470 B/s rd, 0 op/s 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cluster 2026-03-09T18:47:30.095101+0000 mgr.y (mgr.44107) 291 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 470 B/s rd, 0 op/s 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cephadm 2026-03-09T18:47:30.100179+0000 mgr.y (mgr.44107) 292 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cephadm 2026-03-09T18:47:30.100179+0000 mgr.y (mgr.44107) 
292 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cephadm 2026-03-09T18:47:30.110903+0000 mgr.y (mgr.44107) 293 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cephadm 2026-03-09T18:47:30.110903+0000 mgr.y (mgr.44107) 293 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:47:31.586 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cluster 2026-03-09T18:47:30.932946+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:31 vm00 bash[65531]: cluster 2026-03-09T18:47:30.932946+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: audit 2026-03-09T18:47:29.684293+0000 mgr.y (mgr.44107) 288 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: audit 2026-03-09T18:47:29.684293+0000 mgr.y (mgr.44107) 288 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cephadm 2026-03-09T18:47:29.685327+0000 mgr.y (mgr.44107) 289 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cephadm 2026-03-09T18:47:29.685327+0000 mgr.y (mgr.44107) 289 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: audit 2026-03-09T18:47:29.807823+0000 mgr.y (mgr.44107) 290 : audit [DBG] from='client.44440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: audit 2026-03-09T18:47:29.807823+0000 mgr.y (mgr.44107) 290 : audit [DBG] from='client.44440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cluster 2026-03-09T18:47:30.095101+0000 mgr.y (mgr.44107) 291 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 470 B/s rd, 0 op/s 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cluster 2026-03-09T18:47:30.095101+0000 mgr.y (mgr.44107) 291 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 470 B/s rd, 0 op/s 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cephadm 2026-03-09T18:47:30.100179+0000 mgr.y (mgr.44107) 292 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cephadm 2026-03-09T18:47:30.100179+0000 mgr.y (mgr.44107) 
292 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cephadm 2026-03-09T18:47:30.110903+0000 mgr.y (mgr.44107) 293 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cephadm 2026-03-09T18:47:30.110903+0000 mgr.y (mgr.44107) 293 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cluster 2026-03-09T18:47:30.932946+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T18:47:31.587 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:31 vm00 bash[69512]: cluster 2026-03-09T18:47:30.932946+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: audit 2026-03-09T18:47:29.684293+0000 mgr.y (mgr.44107) 288 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: audit 2026-03-09T18:47:29.684293+0000 mgr.y (mgr.44107) 288 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cephadm 2026-03-09T18:47:29.685327+0000 mgr.y (mgr.44107) 289 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cephadm 2026-03-09T18:47:29.685327+0000 mgr.y (mgr.44107) 289 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: audit 2026-03-09T18:47:29.807823+0000 mgr.y (mgr.44107) 290 : audit [DBG] from='client.44440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: audit 2026-03-09T18:47:29.807823+0000 mgr.y (mgr.44107) 290 : audit [DBG] from='client.44440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cluster 2026-03-09T18:47:30.095101+0000 mgr.y (mgr.44107) 291 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 470 B/s rd, 0 op/s 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cluster 2026-03-09T18:47:30.095101+0000 mgr.y (mgr.44107) 291 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 470 B/s rd, 0 op/s 2026-03-09T18:47:31.686 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cephadm 2026-03-09T18:47:30.100179+0000 mgr.y (mgr.44107) 292 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T18:47:31.687 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cephadm 2026-03-09T18:47:30.100179+0000 mgr.y (mgr.44107) 
292 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T18:47:31.687 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cephadm 2026-03-09T18:47:30.110903+0000 mgr.y (mgr.44107) 293 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:47:31.687 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cephadm 2026-03-09T18:47:30.110903+0000 mgr.y (mgr.44107) 293 : cephadm [INF] Deploying daemon osd.6 on vm08 2026-03-09T18:47:31.687 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cluster 2026-03-09T18:47:30.932946+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T18:47:31.687 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:31 vm08 bash[46122]: cluster 2026-03-09T18:47:30.932946+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T18:47:31.964 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:31 vm08 bash[62943]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-6 2026-03-09T18:47:32.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:32.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:47:32.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.6.service: Deactivated successfully. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: Stopped Ceph osd.6 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: Started Ceph osd.6 for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:47:32.225 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:47:32.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:47:32 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: cluster 2026-03-09T18:47:31.637246+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: cluster 2026-03-09T18:47:31.637246+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: cluster 2026-03-09T18:47:31.651448+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e136: 8 total, 7 up, 8 in 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: cluster 2026-03-09T18:47:31.651448+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e136: 8 total, 7 up, 8 in 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: audit 2026-03-09T18:47:32.232301+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: audit 2026-03-09T18:47:32.232301+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: audit 2026-03-09T18:47:32.238637+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: audit 2026-03-09T18:47:32.238637+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:32 vm00 bash[65531]: audit 2026-03-09T18:47:32.240220+0000 mon.c (mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:47:32 vm00 bash[65531]: audit 2026-03-09T18:47:32.240220+0000 mon.c (mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: cluster 2026-03-09T18:47:31.637246+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: cluster 2026-03-09T18:47:31.637246+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: cluster 2026-03-09T18:47:31.651448+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e136: 8 total, 7 up, 8 in 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: cluster 2026-03-09T18:47:31.651448+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e136: 8 total, 7 up, 8 in 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: audit 2026-03-09T18:47:32.232301+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: audit 2026-03-09T18:47:32.232301+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: audit 2026-03-09T18:47:32.238637+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: audit 2026-03-09T18:47:32.238637+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: audit 2026-03-09T18:47:32.240220+0000 mon.c 
(mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:32.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:32 vm00 bash[69512]: audit 2026-03-09T18:47:32.240220+0000 mon.c (mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:32.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:32 vm08 bash[63155]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:47:32.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:32 vm08 bash[63155]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: cluster 2026-03-09T18:47:31.637246+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: cluster 2026-03-09T18:47:31.637246+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: cluster 2026-03-09T18:47:31.651448+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e136: 8 total, 7 up, 8 in 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: cluster 2026-03-09T18:47:31.651448+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e136: 8 total, 7 up, 8 in 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: audit 2026-03-09T18:47:32.232301+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: audit 2026-03-09T18:47:32.232301+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: audit 2026-03-09T18:47:32.238637+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: audit 2026-03-09T18:47:32.238637+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: audit 2026-03-09T18:47:32.240220+0000 mon.c (mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:32 vm08 bash[46122]: audit 2026-03-09T18:47:32.240220+0000 mon.c (mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:47:33.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T18:47:33.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:47:33.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T18:47:33.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 2026-03-09T18:47:33.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d35b69ca-e0c4-4857-8340-134783fac639/osd-block-fdedf8fe-f1d9-48e7-9db9-df7cf33b1093 --path /var/lib/ceph/osd/ceph-6 --no-mon-config 2026-03-09T18:47:33.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: audit 2026-03-09T18:47:31.589869+0000 mgr.y (mgr.44107) 294 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: audit 2026-03-09T18:47:31.589869+0000 mgr.y (mgr.44107) 294 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: cluster 2026-03-09T18:47:32.095392+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v150: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: cluster 2026-03-09T18:47:32.095392+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v150: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: cluster 2026-03-09T18:47:32.673895+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e137: 8 total, 7 up, 8 in 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: cluster 2026-03-09T18:47:32.673895+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e137: 8 total, 7 up, 8 in 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: audit 2026-03-09T18:47:33.100787+0000 mon.c (mon.1) 301 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:33 vm08 bash[46122]: audit 
2026-03-09T18:47:33.100787+0000 mon.c (mon.1) 301 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: audit 2026-03-09T18:47:31.589869+0000 mgr.y (mgr.44107) 294 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: audit 2026-03-09T18:47:31.589869+0000 mgr.y (mgr.44107) 294 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: cluster 2026-03-09T18:47:32.095392+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v150: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: cluster 2026-03-09T18:47:32.095392+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v150: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: cluster 2026-03-09T18:47:32.673895+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e137: 8 total, 7 up, 8 in 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: cluster 2026-03-09T18:47:32.673895+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e137: 8 total, 7 up, 8 in 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: audit 2026-03-09T18:47:33.100787+0000 mon.c (mon.1) 301 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:33 vm00 bash[65531]: audit 2026-03-09T18:47:33.100787+0000 mon.c (mon.1) 301 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: audit 2026-03-09T18:47:31.589869+0000 mgr.y (mgr.44107) 294 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: audit 2026-03-09T18:47:31.589869+0000 mgr.y (mgr.44107) 294 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: cluster 2026-03-09T18:47:32.095392+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v150: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: cluster 2026-03-09T18:47:32.095392+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v150: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: cluster 2026-03-09T18:47:32.673895+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e137: 8 total, 7 up, 8 in 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: cluster 2026-03-09T18:47:32.673895+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e137: 8 
total, 7 up, 8 in 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: audit 2026-03-09T18:47:33.100787+0000 mon.c (mon.1) 301 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:33.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:33 vm00 bash[69512]: audit 2026-03-09T18:47:33.100787+0000 mon.c (mon.1) 301 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:33.974 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/ln -snf /dev/ceph-d35b69ca-e0c4-4857-8340-134783fac639/osd-block-fdedf8fe-f1d9-48e7-9db9-df7cf33b1093 /var/lib/ceph/osd/ceph-6/block 2026-03-09T18:47:33.974 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block 2026-03-09T18:47:33.974 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T18:47:33.974 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 2026-03-09T18:47:33.974 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63155]: --> ceph-volume lvm activate successful for osd ID: 6 2026-03-09T18:47:33.974 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:33 vm08 bash[63503]: debug 2026-03-09T18:47:33.759+0000 7f561ed2d640 1 -- 192.168.123.108:0/551836840 <== mon.2 v2:192.168.123.108:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x56208b4fd680 con 0x56208a70b800 2026-03-09T18:47:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:34 vm00 bash[65531]: audit 2026-03-09T18:47:33.299762+0000 mon.a (mon.0) 446 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:34.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:34 vm00 bash[65531]: audit 2026-03-09T18:47:33.299762+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:34 vm00 bash[69512]: audit 2026-03-09T18:47:33.299762+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:34 vm00 bash[69512]: audit 2026-03-09T18:47:33.299762+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:34.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:34 vm08 bash[63503]: debug 2026-03-09T18:47:34.447+0000 7f5621597740 -1 Falling back to public interface 2026-03-09T18:47:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:34 vm08 bash[46122]: audit 2026-03-09T18:47:33.299762+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:34 vm08 bash[46122]: audit 2026-03-09T18:47:33.299762+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:35 vm00 bash[65531]: cluster 2026-03-09T18:47:34.095683+0000 mgr.y (mgr.44107) 296 : cluster [DBG] pgmap v152: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:47:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:35 vm00 bash[65531]: cluster 2026-03-09T18:47:34.095683+0000 mgr.y (mgr.44107) 296 : cluster [DBG] pgmap v152: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:47:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:35 vm00 bash[69512]: cluster 2026-03-09T18:47:34.095683+0000 mgr.y (mgr.44107) 296 : 
cluster [DBG] pgmap v152: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:47:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:35 vm00 bash[69512]: cluster 2026-03-09T18:47:34.095683+0000 mgr.y (mgr.44107) 296 : cluster [DBG] pgmap v152: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:47:35.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:35 vm08 bash[63503]: debug 2026-03-09T18:47:35.403+0000 7f5621597740 -1 osd.6 0 read_superblock omap replica is missing. 2026-03-09T18:47:35.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:35 vm08 bash[63503]: debug 2026-03-09T18:47:35.415+0000 7f5621597740 -1 osd.6 135 log_to_monitors true 2026-03-09T18:47:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:35 vm08 bash[46122]: cluster 2026-03-09T18:47:34.095683+0000 mgr.y (mgr.44107) 296 : cluster [DBG] pgmap v152: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:47:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:35 vm08 bash[46122]: cluster 2026-03-09T18:47:34.095683+0000 mgr.y (mgr.44107) 296 : cluster [DBG] pgmap v152: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:47:36.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:36 vm00 bash[65531]: audit 2026-03-09T18:47:35.422418+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:36 vm00 bash[65531]: audit 2026-03-09T18:47:35.422418+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 
[v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:36 vm00 bash[65531]: audit 2026-03-09T18:47:35.425612+0000 mon.a (mon.0) 447 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:36 vm00 bash[65531]: audit 2026-03-09T18:47:35.425612+0000 mon.a (mon.0) 447 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:36 vm00 bash[69512]: audit 2026-03-09T18:47:35.422418+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:36 vm00 bash[69512]: audit 2026-03-09T18:47:35.422418+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:36 vm00 bash[69512]: audit 2026-03-09T18:47:35.425612+0000 mon.a (mon.0) 447 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:36 vm00 bash[69512]: audit 2026-03-09T18:47:35.425612+0000 mon.a (mon.0) 447 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["6"]}]: dispatch 2026-03-09T18:47:36.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:36 vm08 bash[63503]: debug 2026-03-09T18:47:36.363+0000 7f5619342640 -1 osd.6 135 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:47:36.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:36 vm08 bash[46122]: audit 2026-03-09T18:47:35.422418+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:36 vm08 bash[46122]: audit 2026-03-09T18:47:35.422418+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:36 vm08 bash[46122]: audit 2026-03-09T18:47:35.425612+0000 mon.a (mon.0) 447 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:36.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:36 vm08 bash[46122]: audit 2026-03-09T18:47:35.425612+0000 mon.a (mon.0) 447 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: cluster 2026-03-09T18:47:36.096263+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v153: 161 pgs: 22 active+undersized, 6 stale+active+clean, 14 active+undersized+degraded, 119 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 49/627 objects degraded (7.815%) 2026-03-09T18:47:37.974 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: cluster 2026-03-09T18:47:36.096263+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v153: 161 pgs: 22 active+undersized, 6 stale+active+clean, 14 active+undersized+degraded, 119 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 49/627 objects degraded (7.815%) 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: cluster 2026-03-09T18:47:36.308441+0000 mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 49/627 objects degraded (7.815%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: cluster 2026-03-09T18:47:36.308441+0000 mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 49/627 objects degraded (7.815%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: audit 2026-03-09T18:47:36.310667+0000 mon.a (mon.0) 449 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: audit 2026-03-09T18:47:36.310667+0000 mon.a (mon.0) 449 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: audit 2026-03-09T18:47:36.311494+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: audit 
2026-03-09T18:47:36.311494+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: cluster 2026-03-09T18:47:36.315234+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: cluster 2026-03-09T18:47:36.315234+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: audit 2026-03-09T18:47:36.321471+0000 mon.a (mon.0) 451 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:37.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:37 vm08 bash[46122]: audit 2026-03-09T18:47:36.321471+0000 mon.a (mon.0) 451 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: cluster 2026-03-09T18:47:36.096263+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v153: 161 pgs: 22 active+undersized, 6 stale+active+clean, 14 active+undersized+degraded, 119 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 49/627 objects degraded (7.815%) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: cluster 2026-03-09T18:47:36.096263+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v153: 161 pgs: 22 active+undersized, 6 stale+active+clean, 14 active+undersized+degraded, 119 
active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 49/627 objects degraded (7.815%) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: cluster 2026-03-09T18:47:36.308441+0000 mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 49/627 objects degraded (7.815%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: cluster 2026-03-09T18:47:36.308441+0000 mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 49/627 objects degraded (7.815%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: audit 2026-03-09T18:47:36.310667+0000 mon.a (mon.0) 449 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: audit 2026-03-09T18:47:36.310667+0000 mon.a (mon.0) 449 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: audit 2026-03-09T18:47:36.311494+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: audit 2026-03-09T18:47:36.311494+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", 
"root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: cluster 2026-03-09T18:47:36.315234+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: cluster 2026-03-09T18:47:36.315234+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: audit 2026-03-09T18:47:36.321471+0000 mon.a (mon.0) 451 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:37 vm00 bash[65531]: audit 2026-03-09T18:47:36.321471+0000 mon.a (mon.0) 451 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: cluster 2026-03-09T18:47:36.096263+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v153: 161 pgs: 22 active+undersized, 6 stale+active+clean, 14 active+undersized+degraded, 119 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 49/627 objects degraded (7.815%) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: cluster 2026-03-09T18:47:36.096263+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v153: 161 pgs: 22 active+undersized, 6 stale+active+clean, 14 active+undersized+degraded, 119 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 49/627 objects degraded (7.815%) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: cluster 2026-03-09T18:47:36.308441+0000 
mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 49/627 objects degraded (7.815%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: cluster 2026-03-09T18:47:36.308441+0000 mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 49/627 objects degraded (7.815%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: audit 2026-03-09T18:47:36.310667+0000 mon.a (mon.0) 449 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: audit 2026-03-09T18:47:36.310667+0000 mon.a (mon.0) 449 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: audit 2026-03-09T18:47:36.311494+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: audit 2026-03-09T18:47:36.311494+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: cluster 2026-03-09T18:47:36.315234+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in 2026-03-09T18:47:38.129 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: cluster 2026-03-09T18:47:36.315234+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in
2026-03-09T18:47:38.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:37 vm00 bash[69512]: audit 2026-03-09T18:47:36.321471+0000 mon.a (mon.0) 451 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: cluster 2026-03-09T18:47:37.323735+0000 mon.a (mon.0) 452 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: cluster 2026-03-09T18:47:37.516282+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698] boot
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: cluster 2026-03-09T18:47:37.516335+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: audit 2026-03-09T18:47:37.633264+0000 mon.c (mon.1) 302 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: cluster 2026-03-09T18:47:38.096708+0000 mgr.y (mgr.44107) 298 : cluster [DBG] pgmap v156: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%)
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: cluster 2026-03-09T18:47:38.505420+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-09T18:47:38.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:38 vm08 bash[46122]: audit 2026-03-09T18:47:38.636046+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: cluster 2026-03-09T18:47:37.323735+0000 mon.a (mon.0) 452 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: cluster 2026-03-09T18:47:37.516282+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698] boot
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: cluster 2026-03-09T18:47:37.516335+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: audit 2026-03-09T18:47:37.633264+0000 mon.c (mon.1) 302 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: cluster 2026-03-09T18:47:38.096708+0000 mgr.y (mgr.44107) 298 : cluster [DBG] pgmap v156: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%)
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: cluster 2026-03-09T18:47:38.505420+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:38 vm00 bash[65531]: audit 2026-03-09T18:47:38.636046+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: cluster 2026-03-09T18:47:37.323735+0000 mon.a (mon.0) 452 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: cluster 2026-03-09T18:47:37.516282+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 [v2:192.168.123.108:6816/3633950698,v1:192.168.123.108:6817/3633950698] boot
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: cluster 2026-03-09T18:47:37.516335+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: audit 2026-03-09T18:47:37.633264+0000 mon.c (mon.1) 302 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: cluster 2026-03-09T18:47:38.096708+0000 mgr.y (mgr.44107) 298 : cluster [DBG] pgmap v156: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%)
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: cluster 2026-03-09T18:47:38.505420+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-09T18:47:39.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:38 vm00 bash[69512]: audit 2026-03-09T18:47:38.636046+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:39 vm00 bash[65531]: audit 2026-03-09T18:47:38.643734+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:39 vm00 bash[65531]: audit 2026-03-09T18:47:39.182425+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:39 vm00 bash[65531]: audit 2026-03-09T18:47:39.187947+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:39 vm00 bash[69512]: audit 2026-03-09T18:47:38.643734+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:39 vm00 bash[69512]: audit 2026-03-09T18:47:39.182425+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:39 vm00 bash[69512]: audit 2026-03-09T18:47:39.187947+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:47:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:47:39] "GET /metrics HTTP/1.1" 200 37862 "" "Prometheus/2.51.0"
2026-03-09T18:47:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:39 vm08 bash[46122]: audit 2026-03-09T18:47:38.643734+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:39 vm08 bash[46122]: audit 2026-03-09T18:47:39.182425+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:39.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:39 vm08 bash[46122]: audit 2026-03-09T18:47:39.187947+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:40 vm08 bash[46122]: cluster 2026-03-09T18:47:40.097074+0000 mgr.y (mgr.44107) 299 : cluster [DBG] pgmap v158: 161 pgs: 25 active+undersized, 15 active+undersized+degraded, 121 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 51/627 objects degraded (8.134%)
2026-03-09T18:47:41.128 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:40 vm00 bash[65531]: cluster 2026-03-09T18:47:40.097074+0000 mgr.y (mgr.44107) 299 : cluster [DBG] pgmap v158: 161 pgs: 25 active+undersized, 15 active+undersized+degraded, 121 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 51/627 objects degraded (8.134%)
2026-03-09T18:47:41.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:40 vm00 bash[69512]: cluster 2026-03-09T18:47:40.097074+0000 mgr.y (mgr.44107) 299 : cluster [DBG] pgmap v158: 161 pgs: 25 active+undersized, 15 active+undersized+degraded, 121 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 51/627 objects degraded (8.134%)
2026-03-09T18:47:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:42 vm08 bash[46122]: cluster 2026-03-09T18:47:42.148119+0000 mon.a (mon.0) 460 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 51/627 objects degraded (8.134%), 15 pgs degraded)
2026-03-09T18:47:42.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:42 vm08 bash[46122]: cluster 2026-03-09T18:47:42.148133+0000 mon.a (mon.0) 461 : cluster [INF] Cluster is now healthy
2026-03-09T18:47:42.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:42 vm00 bash[65531]: cluster 2026-03-09T18:47:42.148119+0000 mon.a (mon.0) 460 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 51/627 objects degraded (8.134%), 15 pgs degraded)
2026-03-09T18:47:42.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:42 vm00 bash[65531]: cluster 2026-03-09T18:47:42.148133+0000 mon.a (mon.0) 461 : cluster [INF] Cluster is now healthy
2026-03-09T18:47:42.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:42 vm00 bash[69512]: cluster 2026-03-09T18:47:42.148119+0000 mon.a (mon.0) 460 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 51/627 objects degraded (8.134%), 15 pgs degraded)
2026-03-09T18:47:42.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:42 vm00 bash[69512]: cluster 2026-03-09T18:47:42.148133+0000 mon.a (mon.0) 461 : cluster [INF] Cluster is now healthy
2026-03-09T18:47:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:43 vm08 bash[46122]: audit 2026-03-09T18:47:41.598895+0000 mgr.y (mgr.44107) 300 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:43 vm08 bash[46122]: cluster 2026-03-09T18:47:42.097446+0000 mgr.y (mgr.44107) 301 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:47:43.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:43 vm00 bash[65531]: audit 2026-03-09T18:47:41.598895+0000 mgr.y (mgr.44107) 300 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:43.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:43 vm00 bash[65531]: cluster 2026-03-09T18:47:42.097446+0000 mgr.y (mgr.44107) 301 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:47:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:43 vm00 bash[69512]: audit 2026-03-09T18:47:41.598895+0000 mgr.y (mgr.44107) 300 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:43 vm00 bash[69512]: cluster 2026-03-09T18:47:42.097446+0000 mgr.y (mgr.44107) 301 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: cluster 2026-03-09T18:47:44.097726+0000 mgr.y (mgr.44107) 302 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.763282+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.770828+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.774026+0000 mon.c (mon.1) 303 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.774931+0000 mon.c (mon.1) 304 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.778688+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.820833+0000 mon.c (mon.1) 305 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:45.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.822091+0000 mon.c (mon.1) 306 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.823381+0000 mon.c (mon.1) 307 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.824222+0000 mon.c (mon.1) 308 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 bash[46122]: audit 2026-03-09T18:47:44.825151+0000 mon.c (mon.1) 309 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-09T18:47:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: cluster 2026-03-09T18:47:44.097726+0000 mgr.y (mgr.44107) 302 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.763282+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.770828+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.774026+0000 mon.c (mon.1) 303 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.774931+0000 mon.c (mon.1) 304 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.778688+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.820833+0000 mon.c (mon.1) 305 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.822091+0000 mon.c (mon.1) 306 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.823381+0000 mon.c (mon.1) 307 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.824222+0000 mon.c (mon.1) 308 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:45 vm00 bash[65531]: audit 2026-03-09T18:47:44.825151+0000 mon.c (mon.1) 309 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: cluster 2026-03-09T18:47:44.097726+0000 mgr.y (mgr.44107) 302 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.763282+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.770828+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.774026+0000 mon.c (mon.1) 303 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.774931+0000 mon.c (mon.1) 304 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.778688+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.820833+0000 mon.c (mon.1) 305 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.822091+0000 mon.c (mon.1) 306 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.823381+0000 mon.c (mon.1) 307 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.824222+0000 mon.c (mon.1) 308 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:45 vm00 bash[69512]: audit 2026-03-09T18:47:44.825151+0000 mon.c (mon.1) 309 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-09T18:47:46.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: Stopping Ceph osd.7 for 614f4990-1be4-11f1-8b84-dfd1edd9d965...
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 bash[30271]: debug 2026-03-09T18:47:46.011+0000 7fe13a771700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 bash[30271]: debug 2026-03-09T18:47:46.011+0000 7fe13a771700 -1 osd.7 140 *** Got signal Terminated ***
2026-03-09T18:47:46.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 bash[30271]: debug 2026-03-09T18:47:46.011+0000 7fe13a771700 -1 osd.7 140 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T18:47:46.224 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.224 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.225 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.225 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:47:45 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.576 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 bash[67764]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-7
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: audit 2026-03-09T18:47:44.825462+0000 mgr.y (mgr.44107) 303 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: cephadm 2026-03-09T18:47:44.826057+0000 mgr.y (mgr.44107) 304 : cephadm [INF] Upgrade: osd.7 is safe to restart
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: cephadm 2026-03-09T18:47:45.230956+0000 mgr.y (mgr.44107) 305 : cephadm [INF] Upgrade: Updating osd.7
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: audit 2026-03-09T18:47:45.234195+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: audit 2026-03-09T18:47:45.238517+0000 mon.c (mon.1) 310 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: audit 2026-03-09T18:47:45.239353+0000 mon.c (mon.1) 311 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: cephadm 2026-03-09T18:47:45.241114+0000 mgr.y (mgr.44107) 306 : cephadm [INF] Deploying daemon osd.7 on vm08
2026-03-09T18:47:46.576 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 bash[46122]: cluster 2026-03-09T18:47:46.019217+0000 mon.a (mon.0) 466 : cluster [INF] osd.7 marked itself down and dead
2026-03-09T18:47:46.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: audit 2026-03-09T18:47:44.825462+0000 mgr.y (mgr.44107) 303 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: cephadm 2026-03-09T18:47:44.826057+0000 mgr.y (mgr.44107) 304 : cephadm [INF] Upgrade: osd.7 is safe to restart
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: cephadm 2026-03-09T18:47:45.230956+0000 mgr.y (mgr.44107) 305 : cephadm [INF] Upgrade: Updating osd.7
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: audit 2026-03-09T18:47:45.234195+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: audit 2026-03-09T18:47:45.238517+0000 mon.c (mon.1) 310 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: audit 2026-03-09T18:47:45.239353+0000 mon.c (mon.1) 311 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: cephadm 2026-03-09T18:47:45.241114+0000 mgr.y (mgr.44107) 306 : cephadm [INF] Deploying daemon osd.7 on vm08
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:46 vm00 bash[65531]: cluster 2026-03-09T18:47:46.019217+0000 mon.a (mon.0) 466 : cluster [INF] osd.7 marked itself down and dead
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: audit 2026-03-09T18:47:44.825462+0000 mgr.y (mgr.44107) 303 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: cephadm 2026-03-09T18:47:44.826057+0000 mgr.y (mgr.44107) 304 : cephadm [INF] Upgrade: osd.7 is safe to restart
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: cephadm 2026-03-09T18:47:45.230956+0000 mgr.y (mgr.44107) 305 : cephadm [INF] Upgrade: Updating osd.7
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: audit 2026-03-09T18:47:45.234195+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: audit 2026-03-09T18:47:45.238517+0000 mon.c (mon.1) 310 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: audit 2026-03-09T18:47:45.239353+0000 mon.c (mon.1) 311 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: cephadm 2026-03-09T18:47:45.241114+0000 mgr.y (mgr.44107) 306 : cephadm [INF] Deploying daemon osd.7 on vm08
2026-03-09T18:47:46.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:46 vm00 bash[69512]: cluster 2026-03-09T18:47:46.019217+0000 mon.a (mon.0) 466 : cluster [INF] osd.7 marked itself down and dead
2026-03-09T18:47:46.891 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.891 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.7.service: Deactivated successfully.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: Stopped Ceph osd.7 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: Started Ceph osd.7 for 614f4990-1be4-11f1-8b84-dfd1edd9d965.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:46.892 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:47:46 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:47:47.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:47.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: cluster 2026-03-09T18:47:46.098361+0000 mgr.y (mgr.44107) 307 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: cluster 2026-03-09T18:47:46.234560+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: cluster 2026-03-09T18:47:46.234576+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: cluster 2026-03-09T18:47:46.268440+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e141: 8 total, 7 up, 8 in
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: audit 2026-03-09T18:47:46.825173+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: audit 2026-03-09T18:47:46.831778+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:47 vm00 bash[65531]: audit 2026-03-09T18:47:46.834531+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: cluster 2026-03-09T18:47:46.098361+0000 mgr.y (mgr.44107) 307 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: cluster 2026-03-09T18:47:46.234560+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: cluster 2026-03-09T18:47:46.234576+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: cluster 2026-03-09T18:47:46.268440+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e141: 8 total, 7 up, 8 in
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: audit 2026-03-09T18:47:46.825173+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: audit 2026-03-09T18:47:46.831778+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:47 vm00 bash[69512]: audit 2026-03-09T18:47:46.834531+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: cluster 2026-03-09T18:47:46.098361+0000 mgr.y (mgr.44107) 307 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: cluster 2026-03-09T18:47:46.234560+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: cluster 2026-03-09T18:47:46.234576+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: cluster 2026-03-09T18:47:46.268440+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e141: 8 total, 7 up, 8 in
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: audit 2026-03-09T18:47:46.825173+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: audit 2026-03-09T18:47:46.831778+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:47 vm08 bash[46122]: audit 2026-03-09T18:47:46.834531+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:48.207 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-09T18:47:48.207 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:48.207 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T18:47:48.208 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-09T18:47:48.208 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:47 vm08 bash[67980]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-fc3905db-d109-4f7b-8f1d-34476a1114ad/osd-block-e8972f61-b0b9-45d8-8b8e-e660f598240a --path /var/lib/ceph/osd/ceph-7 --no-mon-config
2026-03-09T18:47:48.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:48 vm08 bash[67980]: Running command: /usr/bin/ln -snf /dev/ceph-fc3905db-d109-4f7b-8f1d-34476a1114ad/osd-block-e8972f61-b0b9-45d8-8b8e-e660f598240a /var/lib/ceph/osd/ceph-7/block
2026-03-09T18:47:48.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:48 vm08 bash[67980]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block
2026-03-09T18:47:48.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:48 vm08 bash[67980]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2026-03-09T18:47:48.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:48 vm08 bash[67980]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-09T18:47:48.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:48 vm08 bash[67980]: --> ceph-volume lvm activate successful for osd ID: 7
2026-03-09T18:47:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:48 vm08 bash[46122]: cluster 2026-03-09T18:47:47.275999+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e142: 8 total, 7 up, 8 in
2026-03-09T18:47:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:48 vm08 bash[46122]: audit 2026-03-09T18:47:48.105739+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:48 vm08 bash[46122]: audit 2026-03-09T18:47:48.107502+0000 mon.c (mon.1) 313 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:47:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:48 vm00 bash[65531]: cluster 2026-03-09T18:47:47.275999+0000 mon.a (mon.0) 472 :
cluster [DBG] osdmap e142: 8 total, 7 up, 8 in 2026-03-09T18:47:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:48 vm00 bash[65531]: cluster 2026-03-09T18:47:47.275999+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e142: 8 total, 7 up, 8 in 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:48 vm00 bash[65531]: audit 2026-03-09T18:47:48.105739+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:48 vm00 bash[65531]: audit 2026-03-09T18:47:48.105739+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:48 vm00 bash[65531]: audit 2026-03-09T18:47:48.107502+0000 mon.c (mon.1) 313 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:48 vm00 bash[65531]: audit 2026-03-09T18:47:48.107502+0000 mon.c (mon.1) 313 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:48 vm00 bash[69512]: cluster 2026-03-09T18:47:47.275999+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e142: 8 total, 7 up, 8 in 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:48 vm00 bash[69512]: cluster 2026-03-09T18:47:47.275999+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e142: 8 total, 7 up, 8 in 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:48 vm00 bash[69512]: audit 2026-03-09T18:47:48.105739+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:48 vm00 bash[69512]: audit 2026-03-09T18:47:48.105739+0000 mon.a (mon.0) 473 : 
audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:48 vm00 bash[69512]: audit 2026-03-09T18:47:48.107502+0000 mon.c (mon.1) 313 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:48 vm00 bash[69512]: audit 2026-03-09T18:47:48.107502+0000 mon.c (mon.1) 313 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:47:49.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:49 vm08 bash[68327]: debug 2026-03-09T18:47:49.055+0000 7f8fe2ef2740 -1 Falling back to public interface 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: cluster 2026-03-09T18:47:48.099638+0000 mgr.y (mgr.44107) 308 : cluster [DBG] pgmap v164: 161 pgs: 6 active+undersized, 23 peering, 7 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1/627 objects degraded (0.159%) 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: cluster 2026-03-09T18:47:48.099638+0000 mgr.y (mgr.44107) 308 : cluster [DBG] pgmap v164: 161 pgs: 6 active+undersized, 23 peering, 7 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1/627 objects degraded (0.159%) 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: cluster 2026-03-09T18:47:48.273721+0000 mon.a (mon.0) 474 : cluster [WRN] Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: cluster 2026-03-09T18:47:48.273721+0000 mon.a (mon.0) 474 : cluster 
[WRN] Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: cluster 2026-03-09T18:47:48.273757+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: cluster 2026-03-09T18:47:48.273757+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: audit 2026-03-09T18:47:48.308567+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:49 vm08 bash[46122]: audit 2026-03-09T18:47:48.308567+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:49.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: cluster 2026-03-09T18:47:48.099638+0000 mgr.y (mgr.44107) 308 : cluster [DBG] pgmap v164: 161 pgs: 6 active+undersized, 23 peering, 7 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1/627 objects degraded (0.159%) 2026-03-09T18:47:49.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: cluster 2026-03-09T18:47:48.099638+0000 mgr.y (mgr.44107) 308 : cluster [DBG] pgmap v164: 161 pgs: 6 active+undersized, 23 peering, 7 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1/627 objects degraded (0.159%) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: cluster 2026-03-09T18:47:48.273721+0000 mon.a (mon.0) 474 : cluster [WRN] Health check 
failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: cluster 2026-03-09T18:47:48.273721+0000 mon.a (mon.0) 474 : cluster [WRN] Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: cluster 2026-03-09T18:47:48.273757+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: cluster 2026-03-09T18:47:48.273757+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: audit 2026-03-09T18:47:48.308567+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:49 vm00 bash[65531]: audit 2026-03-09T18:47:48.308567+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: cluster 2026-03-09T18:47:48.099638+0000 mgr.y (mgr.44107) 308 : cluster [DBG] pgmap v164: 161 pgs: 6 active+undersized, 23 peering, 7 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 1/627 objects degraded (0.159%) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: cluster 2026-03-09T18:47:48.099638+0000 mgr.y (mgr.44107) 308 : cluster [DBG] pgmap v164: 161 pgs: 6 active+undersized, 23 peering, 7 stale+active+clean, 1 active+undersized+degraded, 124 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 
160 GiB avail; 1/627 objects degraded (0.159%) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: cluster 2026-03-09T18:47:48.273721+0000 mon.a (mon.0) 474 : cluster [WRN] Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: cluster 2026-03-09T18:47:48.273721+0000 mon.a (mon.0) 474 : cluster [WRN] Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: cluster 2026-03-09T18:47:48.273757+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: cluster 2026-03-09T18:47:48.273757+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: Degraded data redundancy: 1/627 objects degraded (0.159%), 1 pg degraded (PG_DEGRADED) 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: audit 2026-03-09T18:47:48.308567+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:49 vm00 bash[69512]: audit 2026-03-09T18:47:48.308567+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:47:49.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:47:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:47:49] "GET /metrics HTTP/1.1" 200 37945 "" "Prometheus/2.51.0" 2026-03-09T18:47:50.724 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:50 vm08 bash[68327]: debug 2026-03-09T18:47:50.283+0000 7f8fe2ef2740 -1 osd.7 0 read_superblock omap replica is missing. 
2026-03-09T18:47:50.724 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:50 vm08 bash[68327]: debug 2026-03-09T18:47:50.327+0000 7f8fe2ef2740 -1 osd.7 140 log_to_monitors true 2026-03-09T18:47:51.603 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:51 vm00 bash[65531]: cluster 2026-03-09T18:47:50.100040+0000 mgr.y (mgr.44107) 309 : cluster [DBG] pgmap v165: 161 pgs: 15 active+undersized, 23 peering, 4 stale+active+clean, 8 active+undersized+degraded, 111 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 12/627 objects degraded (1.914%) 2026-03-09T18:47:51.603 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:51 vm00 bash[65531]: cluster 2026-03-09T18:47:50.100040+0000 mgr.y (mgr.44107) 309 : cluster [DBG] pgmap v165: 161 pgs: 15 active+undersized, 23 peering, 4 stale+active+clean, 8 active+undersized+degraded, 111 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 12/627 objects degraded (1.914%) 2026-03-09T18:47:51.603 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:51 vm00 bash[65531]: audit 2026-03-09T18:47:50.333416+0000 mon.b (mon.2) 23 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.603 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:51 vm00 bash[65531]: audit 2026-03-09T18:47:50.333416+0000 mon.b (mon.2) 23 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:51 vm00 bash[65531]: audit 2026-03-09T18:47:50.336661+0000 mon.a (mon.0) 477 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.604 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:51 vm00 bash[65531]: audit 2026-03-09T18:47:50.336661+0000 mon.a (mon.0) 477 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:51 vm00 bash[69512]: cluster 2026-03-09T18:47:50.100040+0000 mgr.y (mgr.44107) 309 : cluster [DBG] pgmap v165: 161 pgs: 15 active+undersized, 23 peering, 4 stale+active+clean, 8 active+undersized+degraded, 111 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 12/627 objects degraded (1.914%) 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:51 vm00 bash[69512]: cluster 2026-03-09T18:47:50.100040+0000 mgr.y (mgr.44107) 309 : cluster [DBG] pgmap v165: 161 pgs: 15 active+undersized, 23 peering, 4 stale+active+clean, 8 active+undersized+degraded, 111 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 12/627 objects degraded (1.914%) 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:51 vm00 bash[69512]: audit 2026-03-09T18:47:50.333416+0000 mon.b (mon.2) 23 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:51 vm00 bash[69512]: audit 2026-03-09T18:47:50.333416+0000 mon.b (mon.2) 23 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:51 vm00 bash[69512]: audit 2026-03-09T18:47:50.336661+0000 mon.a (mon.0) 477 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", 
"ids": ["7"]}]: dispatch 2026-03-09T18:47:51.604 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:51 vm00 bash[69512]: audit 2026-03-09T18:47:50.336661+0000 mon.a (mon.0) 477 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:51 vm08 bash[46122]: cluster 2026-03-09T18:47:50.100040+0000 mgr.y (mgr.44107) 309 : cluster [DBG] pgmap v165: 161 pgs: 15 active+undersized, 23 peering, 4 stale+active+clean, 8 active+undersized+degraded, 111 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 12/627 objects degraded (1.914%) 2026-03-09T18:47:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:51 vm08 bash[46122]: cluster 2026-03-09T18:47:50.100040+0000 mgr.y (mgr.44107) 309 : cluster [DBG] pgmap v165: 161 pgs: 15 active+undersized, 23 peering, 4 stale+active+clean, 8 active+undersized+degraded, 111 active+clean; 457 KiB data, 250 MiB used, 160 GiB / 160 GiB avail; 12/627 objects degraded (1.914%) 2026-03-09T18:47:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:51 vm08 bash[46122]: audit 2026-03-09T18:47:50.333416+0000 mon.b (mon.2) 23 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:51 vm08 bash[46122]: audit 2026-03-09T18:47:50.333416+0000 mon.b (mon.2) 23 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:51 vm08 bash[46122]: audit 2026-03-09T18:47:50.336661+0000 mon.a (mon.0) 477 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": 
"osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:51 vm08 bash[46122]: audit 2026-03-09T18:47:50.336661+0000 mon.a (mon.0) 477 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T18:47:52.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:47:52 vm08 bash[68327]: debug 2026-03-09T18:47:52.147+0000 7f8fda49c640 -1 osd.7 140 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: audit 2026-03-09T18:47:51.291673+0000 mon.a (mon.0) 478 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: audit 2026-03-09T18:47:51.291673+0000 mon.a (mon.0) 478 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: cluster 2026-03-09T18:47:51.294794+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e143: 8 total, 7 up, 8 in 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: cluster 2026-03-09T18:47:51.294794+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e143: 8 total, 7 up, 8 in 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: audit 2026-03-09T18:47:51.294991+0000 mon.b (mon.2) 24 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: audit 2026-03-09T18:47:51.294991+0000 mon.b (mon.2) 24 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: audit 2026-03-09T18:47:51.307039+0000 mon.a (mon.0) 480 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:52 vm08 bash[46122]: audit 2026-03-09T18:47:51.307039+0000 mon.a (mon.0) 480 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: audit 2026-03-09T18:47:51.291673+0000 mon.a (mon.0) 478 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: audit 2026-03-09T18:47:51.291673+0000 mon.a (mon.0) 478 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: cluster 2026-03-09T18:47:51.294794+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e143: 8 total, 7 up, 8 in 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: cluster 2026-03-09T18:47:51.294794+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e143: 8 total, 7 up, 8 in 2026-03-09T18:47:52.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: audit 2026-03-09T18:47:51.294991+0000 mon.b (mon.2) 24 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: audit 2026-03-09T18:47:51.294991+0000 mon.b (mon.2) 24 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: audit 2026-03-09T18:47:51.307039+0000 mon.a (mon.0) 480 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:52 vm00 bash[65531]: audit 2026-03-09T18:47:51.307039+0000 mon.a (mon.0) 480 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: audit 2026-03-09T18:47:51.291673+0000 mon.a (mon.0) 478 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: audit 2026-03-09T18:47:51.291673+0000 mon.a (mon.0) 478 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T18:47:52.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: cluster 2026-03-09T18:47:51.294794+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e143: 8 total, 7 up, 8 in 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: cluster 2026-03-09T18:47:51.294794+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e143: 8 total, 7 up, 8 in 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: audit 2026-03-09T18:47:51.294991+0000 mon.b (mon.2) 24 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: audit 2026-03-09T18:47:51.294991+0000 mon.b (mon.2) 24 : audit [INF] from='osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: audit 2026-03-09T18:47:51.307039+0000 mon.a (mon.0) 480 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:52.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:52 vm00 bash[69512]: audit 2026-03-09T18:47:51.307039+0000 mon.a (mon.0) 480 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: audit 2026-03-09T18:47:51.606199+0000 mgr.y (mgr.44107) 310 : audit [DBG] from='client.25132 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: audit 2026-03-09T18:47:51.606199+0000 mgr.y (mgr.44107) 310 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.100468+0000 mgr.y (mgr.44107) 311 : cluster [DBG] pgmap v167: 161 pgs: 41 active+undersized, 27 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%) 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.100468+0000 mgr.y (mgr.44107) 311 : cluster [DBG] pgmap v167: 161 pgs: 41 active+undersized, 27 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%) 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.300884+0000 mon.a (mon.0) 481 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 5 pgs peering) 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.300884+0000 mon.a (mon.0) 481 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 5 pgs peering) 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.303129+0000 mon.a (mon.0) 482 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.303129+0000 mon.a (mon.0) 482 : 
cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: audit 2026-03-09T18:47:52.336345+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.343625+0000 mon.a (mon.0) 483 : cluster [INF] osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785] boot
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: cluster 2026-03-09T18:47:52.343765+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: audit 2026-03-09T18:47:53.210236+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:53 vm00 bash[65531]: audit 2026-03-09T18:47:53.216626+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: audit 2026-03-09T18:47:51.606199+0000 mgr.y (mgr.44107) 310 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: cluster 2026-03-09T18:47:52.100468+0000 mgr.y (mgr.44107) 311 : cluster [DBG] pgmap v167: 161 pgs: 41 active+undersized, 27 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: cluster 2026-03-09T18:47:52.300884+0000 mon.a (mon.0) 481 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 5 pgs peering)
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: cluster 2026-03-09T18:47:52.303129+0000 mon.a (mon.0) 482 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: audit 2026-03-09T18:47:52.336345+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: cluster 2026-03-09T18:47:52.343625+0000 mon.a (mon.0) 483 : cluster [INF] osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785] boot
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: cluster 2026-03-09T18:47:52.343765+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: audit 2026-03-09T18:47:53.210236+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:53.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:53 vm00 bash[69512]: audit 2026-03-09T18:47:53.216626+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: audit 2026-03-09T18:47:51.606199+0000 mgr.y (mgr.44107) 310 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: cluster 2026-03-09T18:47:52.100468+0000 mgr.y (mgr.44107) 311 : cluster [DBG] pgmap v167: 161 pgs: 41 active+undersized, 27 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: cluster 2026-03-09T18:47:52.300884+0000 mon.a (mon.0) 481 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 5 pgs peering)
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: cluster 2026-03-09T18:47:52.303129+0000 mon.a (mon.0) 482 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: audit 2026-03-09T18:47:52.336345+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: cluster 2026-03-09T18:47:52.343625+0000 mon.a (mon.0) 483 : cluster [INF] osd.7 [v2:192.168.123.108:6824/3442441785,v1:192.168.123.108:6825/3442441785] boot
2026-03-09T18:47:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: cluster 2026-03-09T18:47:52.343765+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-09T18:47:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: audit 2026-03-09T18:47:53.210236+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:53 vm08 bash[46122]: audit 2026-03-09T18:47:53.216626+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:54 vm00 bash[65531]: cluster 2026-03-09T18:47:52.151279+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 35045.710354 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:54 vm00 bash[65531]: cluster 2026-03-09T18:47:53.331430+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:54 vm00 bash[65531]: audit 2026-03-09T18:47:53.786954+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:54 vm00 bash[65531]: audit 2026-03-09T18:47:53.795084+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:54 vm00 bash[65531]: cluster 2026-03-09T18:47:54.178182+0000 mon.a (mon.0) 490 : cluster [WRN] Health check update: Degraded data redundancy: 83/627 objects degraded (13.238%), 27 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:54 vm00 bash[69512]: cluster 2026-03-09T18:47:52.151279+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 35045.710354 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:54 vm00 bash[69512]: cluster 2026-03-09T18:47:53.331430+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:54 vm00 bash[69512]: audit 2026-03-09T18:47:53.786954+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:54 vm00 bash[69512]: audit 2026-03-09T18:47:53.795084+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:54 vm00 bash[69512]: cluster 2026-03-09T18:47:54.178182+0000 mon.a (mon.0) 490 : cluster [WRN] Health check update: Degraded data redundancy: 83/627 objects degraded (13.238%), 27 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:54 vm08 bash[46122]: cluster 2026-03-09T18:47:52.151279+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 35045.710354 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T18:47:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:54 vm08 bash[46122]: cluster 2026-03-09T18:47:53.331430+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-09T18:47:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:54 vm08 bash[46122]: audit 2026-03-09T18:47:53.786954+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:54 vm08 bash[46122]: audit 2026-03-09T18:47:53.795084+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:54 vm08 bash[46122]: cluster 2026-03-09T18:47:54.178182+0000 mon.a (mon.0) 490 : cluster [WRN] Health check update: Degraded data redundancy: 83/627 objects degraded (13.238%), 27 pgs degraded (PG_DEGRADED)
2026-03-09T18:47:55.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:55 vm00 bash[65531]: cluster 2026-03-09T18:47:54.100781+0000 mgr.y (mgr.44107) 312 : cluster [DBG] pgmap v170: 161 pgs: 4 peering, 38 active+undersized, 26 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 82/627 objects degraded (13.078%)
2026-03-09T18:47:55.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:55 vm00 bash[69512]: cluster 2026-03-09T18:47:54.100781+0000 mgr.y (mgr.44107) 312 : cluster [DBG] pgmap v170: 161 pgs: 4 peering, 38 active+undersized, 26 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 82/627 objects degraded (13.078%)
2026-03-09T18:47:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:55 vm08 bash[46122]: cluster 2026-03-09T18:47:54.100781+0000 mgr.y (mgr.44107) 312 : cluster [DBG] pgmap v170: 161 pgs: 4 peering, 38 active+undersized, 26 active+undersized+degraded, 93 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 82/627 objects degraded (13.078%)
2026-03-09T18:47:57.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:57 vm00 bash[65531]: cluster 2026-03-09T18:47:56.101198+0000 mgr.y (mgr.44107) 313 : cluster [DBG] pgmap v171: 161 pgs: 4 peering, 18 active+undersized, 15 active+undersized+degraded, 124 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 55/627 objects degraded (8.772%)
2026-03-09T18:47:57.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:57 vm00 bash[69512]: cluster 2026-03-09T18:47:56.101198+0000 mgr.y (mgr.44107) 313 : cluster [DBG] pgmap v171: 161 pgs: 4 peering, 18 active+undersized, 15 active+undersized+degraded, 124 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 55/627 objects degraded (8.772%)
2026-03-09T18:47:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:57 vm08 bash[46122]: cluster 2026-03-09T18:47:56.101198+0000 mgr.y (mgr.44107) 313 : cluster [DBG] pgmap v171: 161 pgs: 4 peering, 18 active+undersized, 15 active+undersized+degraded, 124 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s; 55/627 objects degraded (8.772%)
2026-03-09T18:47:58.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:58 vm08 bash[46122]: cluster 2026-03-09T18:47:58.362628+0000 mon.a (mon.0) 491 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 55/627 objects degraded (8.772%), 15 pgs degraded)
2026-03-09T18:47:58.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:58 vm00 bash[65531]: cluster 2026-03-09T18:47:58.362628+0000 mon.a (mon.0) 491 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 55/627 objects degraded (8.772%), 15 pgs degraded)
2026-03-09T18:47:58.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:58 vm00 bash[69512]: cluster 2026-03-09T18:47:58.362628+0000 mon.a (mon.0) 491 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 55/627 objects degraded (8.772%), 15 pgs degraded)
2026-03-09T18:47:59.566 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: cluster 2026-03-09T18:47:58.101617+0000 mgr.y (mgr.44107) 314 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 752 B/s rd, 0 op/s
2026-03-09T18:47:59.566 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.346224+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.566 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.352085+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.566 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.354120+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:59.566 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.354741+0000 mon.c (mon.1) 316 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:47:59.567 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.358989+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.567 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.400842+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:59.567 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.402633+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.567 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.404109+0000 mon.c (mon.1) 319 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.567 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.405299+0000 mon.c (mon.1) 320 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.567 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:47:59 vm08 bash[46122]: audit 2026-03-09T18:47:59.406964+0000 mon.c (mon.1) 321 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: cluster 2026-03-09T18:47:58.101617+0000 mgr.y (mgr.44107) 314 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 752 B/s rd, 0 op/s
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.346224+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.352085+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.354120+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.354741+0000 mon.c (mon.1) 316 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.358989+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.400842+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.402633+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.404109+0000 mon.c (mon.1) 319 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.405299+0000 mon.c (mon.1) 320 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:47:59 vm00 bash[65531]: audit 2026-03-09T18:47:59.406964+0000 mon.c (mon.1) 321 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: cluster 2026-03-09T18:47:58.101617+0000 mgr.y (mgr.44107) 314 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 752 B/s rd, 0 op/s
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.346224+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.352085+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.354120+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.354741+0000 mon.c (mon.1) 316 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.358989+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.400842+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:47:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.402633+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:47:59.879
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.402633+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.404109+0000 mon.c (mon.1) 319 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.404109+0000 mon.c (mon.1) 319 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.405299+0000 mon.c (mon.1) 320 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.405299+0000 mon.c (mon.1) 320 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.406964+0000 mon.c (mon.1) 321 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:47:59 vm00 bash[69512]: audit 2026-03-09T18:47:59.406964+0000 mon.c (mon.1) 321 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:47:59.880 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:47:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:47:59] "GET 
/metrics HTTP/1.1" 200 37945 "" "Prometheus/2.51.0" 2026-03-09T18:48:00.019 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (18m) 84s ago 25m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (5m) 7s ago 25m 67.1M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (6m) 84s ago 24m 44.2M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (6m) 7s ago 27m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (15m) 84s ago 28m 530M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (4m) 84s ago 28m 49.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (5m) 7s ago 28m 48.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (4m) 84s ago 28m 46.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (18m) 84s ago 25m 8028k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (18m) 7s ago 25m 8055k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (2m) 84s ago 27m 
45.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (89s) 84s ago 27m 22.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (3m) 84s ago 27m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (3m) 84s ago 26m 69.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (73s) 7s ago 26m 51.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (42s) 7s ago 26m 68.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (26s) 7s ago 26m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (12s) 7s ago 25m 22.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9904fad47d23 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (6m) 7s ago 25m 43.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (24m) 84s ago 24m 89.4M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:48:00.425 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (24m) 7s ago 24m 90.8M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: cephadm 2026-03-09T18:47:59.407803+0000 mgr.y (mgr.44107) 315 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: cephadm 
2026-03-09T18:47:59.407803+0000 mgr.y (mgr.44107) 315 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.436950+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.436950+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.441422+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.441422+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.441571+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.441571+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.458287+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 
2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.458287+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.461717+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.461717+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.461893+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.461893+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.464287+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.464287+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "osd.1"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.467099+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.467099+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.467265+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.467265+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.477120+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.477120+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.480750+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.480750+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.480906+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.480906+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.483411+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.483411+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.484758+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 
2026-03-09T18:47:59.484758+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.484960+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.484960+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.487430+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.487430+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.488871+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.488871+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.725 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.489087+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.489087+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.491577+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T18:48:00.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.491577+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.493037+0000 mon.c (mon.1) 328 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.493037+0000 mon.c (mon.1) 328 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.493238+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"osd.6"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.493238+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.495756+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.495756+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.498089+0000 mon.c (mon.1) 329 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.498089+0000 mon.c (mon.1) 329 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.498295+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.498295+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": 
"config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.500867+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.500867+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: cephadm 2026-03-09T18:47:59.503015+0000 mgr.y (mgr.44107) 316 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: cephadm 2026-03-09T18:47:59.503015+0000 mgr.y (mgr.44107) 316 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.503173+0000 mon.c (mon.1) 330 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.503173+0000 mon.c (mon.1) 330 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.503380+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.726 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:47:59.503380+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:48:00.010944+0000 mgr.y (mgr.44107) 317 : audit [DBG] from='client.54447 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:48:00.010944+0000 mgr.y (mgr.44107) 317 : audit [DBG] from='client.54447 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: cluster 2026-03-09T18:48:00.102005+0000 mgr.y (mgr.44107) 318 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: cluster 2026-03-09T18:48:00.102005+0000 mgr.y (mgr.44107) 318 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:48:00.211735+0000 mgr.y (mgr.44107) 319 : audit [DBG] from='client.54453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:48:00.211735+0000 mgr.y (mgr.44107) 319 : audit [DBG] from='client.54453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:48:00.424785+0000 mgr.y (mgr.44107) 320 : audit [DBG] from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:00 vm08 bash[46122]: audit 2026-03-09T18:48:00.424785+0000 mgr.y (mgr.44107) 320 : audit [DBG] from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:48:00.759 
INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2, 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 13 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:48:00.759 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: cephadm 2026-03-09T18:47:59.407803+0000 mgr.y (mgr.44107) 315 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: cephadm 2026-03-09T18:47:59.407803+0000 mgr.y (mgr.44107) 315 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.436950+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.436950+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.441422+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.441422+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 
2026-03-09T18:47:59.441571+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.441571+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.458287+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.458287+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.461717+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.461717+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.461893+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.879 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.461893+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.464287+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.464287+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.467099+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.467099+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.467265+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.467265+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"osd.2"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.477120+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.477120+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.480750+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.480750+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.480906+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.480906+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.483411+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": 
"config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.483411+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.484758+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.484758+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.484960+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.484960+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.487430+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.487430+0000 mon.a (mon.0) 505 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.488871+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.488871+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.489087+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.489087+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.491577+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.491577+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 
2026-03-09T18:47:59.493037+0000 mon.c (mon.1) 328 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.493037+0000 mon.c (mon.1) 328 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.493238+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.493238+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.495756+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.495756+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.498089+0000 mon.c (mon.1) 329 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.880 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.498089+0000 mon.c (mon.1) 329 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.498295+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.498295+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.500867+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.500867+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: cephadm 2026-03-09T18:47:59.503015+0000 mgr.y (mgr.44107) 316 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: cephadm 2026-03-09T18:47:59.503015+0000 mgr.y (mgr.44107) 316 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 
2026-03-09T18:47:59.503173+0000 mon.c (mon.1) 330 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.503173+0000 mon.c (mon.1) 330 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.503380+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:47:59.503380+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:48:00.010944+0000 mgr.y (mgr.44107) 317 : audit [DBG] from='client.54447 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:48:00.010944+0000 mgr.y (mgr.44107) 317 : audit [DBG] from='client.54447 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: cluster 2026-03-09T18:48:00.102005+0000 mgr.y (mgr.44107) 318 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 
vm00 bash[65531]: cluster 2026-03-09T18:48:00.102005+0000 mgr.y (mgr.44107) 318 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:48:00.211735+0000 mgr.y (mgr.44107) 319 : audit [DBG] from='client.54453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:48:00.211735+0000 mgr.y (mgr.44107) 319 : audit [DBG] from='client.54453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:48:00.424785+0000 mgr.y (mgr.44107) 320 : audit [DBG] from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:00 vm00 bash[65531]: audit 2026-03-09T18:48:00.424785+0000 mgr.y (mgr.44107) 320 : audit [DBG] from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: cephadm 2026-03-09T18:47:59.407803+0000 mgr.y (mgr.44107) 315 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: cephadm 2026-03-09T18:47:59.407803+0000 mgr.y (mgr.44107) 315 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.436950+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.436950+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.441422+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.441422+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.441571+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.441571+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.458287+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.458287+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T18:48:00.881 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.461717+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.461717+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.461893+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.461893+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.464287+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.464287+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.467099+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.467099+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.467265+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.467265+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.477120+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.477120+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.480750+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.480750+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.480906+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.480906+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.483411+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T18:48:00.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.483411+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.484758+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.484758+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 
2026-03-09T18:47:59.484960+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.484960+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.487430+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.487430+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.488871+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.488871+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.489087+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.882 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.489087+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.491577+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.491577+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.493037+0000 mon.c (mon.1) 328 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.493037+0000 mon.c (mon.1) 328 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.493238+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.493238+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"osd.6"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.495756+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.495756+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.498089+0000 mon.c (mon.1) 329 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.498089+0000 mon.c (mon.1) 329 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.498295+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.498295+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.500867+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": 
"config rm", "name": "container_image", "who": "osd.7"}]': finished
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: cephadm 2026-03-09T18:47:59.503015+0000 mgr.y (mgr.44107) 316 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.503173+0000 mon.c (mon.1) 330 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:47:59.503380+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:48:00.010944+0000 mgr.y (mgr.44107) 317 : audit [DBG] from='client.54447 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: cluster 2026-03-09T18:48:00.102005+0000 mgr.y (mgr.44107) 318 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:48:00.211735+0000 mgr.y (mgr.44107) 319 : audit [DBG] from='client.54453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:00.882 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:00 vm00 bash[69512]: audit 2026-03-09T18:48:00.424785+0000 mgr.y (mgr.44107) 320 : audit [DBG] from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null,
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false,
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "which": "",
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [],
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null,
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "message": "",
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:48:00.955 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:48:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cluster 2026-03-09T18:48:00.500643+0000 mon.a (mon.0) 513 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid)
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cluster 2026-03-09T18:48:00.500661+0000 mon.a (mon.0) 514 : cluster [INF] Cluster is now healthy
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.506809+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cluster 2026-03-09T18:48:00.510101+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.511153+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.512030+0000 mgr.y (mgr.44107) 321 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.517984+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.519723+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.520293+0000 mgr.y (mgr.44107) 322 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.524334+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.528510+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.529483+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.529937+0000 mgr.y (mgr.44107) 323 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.536359+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.538229+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.538768+0000 mgr.y (mgr.44107) 324 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.542019+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.544058+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.544599+0000 mgr.y (mgr.44107) 325 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.547568+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.549696+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.550644+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.551596+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.552538+0000 mon.c (mon.1) 340 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.553414+0000 mon.c (mon.1) 341 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.554401+0000 mon.c (mon.1) 342 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.555183+0000 mgr.y (mgr.44107) 326 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.556000+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.556231+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.559144+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.560575+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.560819+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.563578+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.564971+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.565187+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.575148+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.578926+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.579295+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.581960+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-09T18:48:01.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.586920+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.587354+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.590120+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.594668+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.594943+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.596394+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.596594+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.599597+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.604129+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.604357+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.605443+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.605641+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.608468+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.613236+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.613494+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.614562+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.614760+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.618589+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.623709+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.623943+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.626507+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.630801+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.631013+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.632049+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.632246+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.632246+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.633281+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.633281+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.633475+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.633475+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.634472+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.634472+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.634654+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.634654+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.635877+0000 mon.c (mon.1) 359 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.635877+0000 mon.c (mon.1) 359 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.636078+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.636078+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.637115+0000 mon.c (mon.1) 360 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.637115+0000 mon.c (mon.1) 360 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.637324+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.637324+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.638211+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: cephadm 2026-03-09T18:48:00.638211+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.638611+0000 mon.c (mon.1) 361 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.638611+0000 mon.c (mon.1) 361 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.638803+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.638803+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.642027+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.642027+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.645211+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.645211+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.646807+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.646807+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.647815+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.647815+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.652671+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.652671+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 
2026-03-09T18:48:00.699155+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.699155+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.700832+0000 mon.c (mon.1) 366 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.700832+0000 mon.c (mon.1) 366 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.701789+0000 mon.c (mon.1) 367 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.701789+0000 mon.c (mon.1) 367 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.707011+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.707011+0000 mon.a (mon.0) 552 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.762101+0000 mon.c (mon.1) 368 : audit [DBG] from='client.? 192.168.123.100:0/3928481967' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.762101+0000 mon.c (mon.1) 368 : audit [DBG] from='client.? 192.168.123.100:0/3928481967' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.958331+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:01.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:01 vm08 bash[46122]: audit 2026-03-09T18:48:00.958331+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cluster 2026-03-09T18:48:00.500643+0000 mon.a (mon.0) 513 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cluster 2026-03-09T18:48:00.500643+0000 mon.a (mon.0) 513 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cluster 2026-03-09T18:48:00.500661+0000 mon.a (mon.0) 514 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:01.880 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cluster 2026-03-09T18:48:00.500661+0000 mon.a (mon.0) 514 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.506809+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.506809+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cluster 2026-03-09T18:48:00.510101+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cluster 2026-03-09T18:48:00.510101+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.511153+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.511153+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.512030+0000 mgr.y (mgr.44107) 321 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 
2026-03-09T18:48:00.512030+0000 mgr.y (mgr.44107) 321 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.517984+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.517984+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.519723+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.519723+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.520293+0000 mgr.y (mgr.44107) 322 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.520293+0000 mgr.y (mgr.44107) 322 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.524334+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.524334+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: 
audit 2026-03-09T18:48:00.528510+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.528510+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.529483+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.529483+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.529937+0000 mgr.y (mgr.44107) 323 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.529937+0000 mgr.y (mgr.44107) 323 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.536359+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.536359+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.538229+0000 mon.c (mon.1) 335 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.538229+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.538768+0000 mgr.y (mgr.44107) 324 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.538768+0000 mgr.y (mgr.44107) 324 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.542019+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.542019+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.544058+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.544058+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.544599+0000 mgr.y (mgr.44107) 325 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:01.880 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.544599+0000 mgr.y (mgr.44107) 325 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.547568+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.547568+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.549696+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.549696+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.550644+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.550644+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.551596+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.551596+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.552538+0000 mon.c (mon.1) 340 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.552538+0000 mon.c (mon.1) 340 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.553414+0000 mon.c (mon.1) 341 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.553414+0000 mon.c (mon.1) 341 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.554401+0000 mon.c (mon.1) 342 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.554401+0000 mon.c (mon.1) 342 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.555183+0000 mgr.y (mgr.44107) 
326 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.555183+0000 mgr.y (mgr.44107) 326 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.556000+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.556000+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.556231+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.556231+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.559144+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.559144+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "mgr"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.560575+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.560575+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.560819+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.560819+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.563578+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.563578+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.564971+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.564971+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.565187+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.565187+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.575148+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.575148+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.578926+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 
18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.578926+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.579295+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.579295+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.581960+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.581960+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.586920+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.586920+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 
2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.587354+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.587354+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.590120+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.590120+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.594668+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.594668+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.594943+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.594943+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.596394+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.596394+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.596594+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.596594+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.599597+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 
2026-03-09T18:48:00.599597+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.604129+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.604129+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.604357+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.604357+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.605443+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.605443+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.ceph-exporter"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.605641+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.605641+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.608468+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.608468+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.613236+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.613236+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:01.881 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 
2026-03-09T18:48:00.613494+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.613494+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.614562+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.614562+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.614760+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.614760+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.618589+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:48:01.882 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.618589+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.623709+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.623709+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.623943+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.623943+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.626507+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.626507+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": 
"config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.630801+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.630801+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.631013+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.631013+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.632049+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.632049+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 
2026-03-09T18:48:00.632246+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.632246+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.633281+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.633281+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.633475+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.633475+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.634472+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.634472+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.634654+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.634654+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.635877+0000 mon.c (mon.1) 359 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.635877+0000 mon.c (mon.1) 359 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.636078+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.636078+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.637115+0000 mon.c (mon.1) 360 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.637115+0000 mon.c (mon.1) 360 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.637324+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.637324+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.638211+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: cephadm 2026-03-09T18:48:00.638211+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.638611+0000 mon.c (mon.1) 361 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.638611+0000 mon.c (mon.1) 361 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.638803+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.638803+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.642027+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.642027+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.645211+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.645211+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.646807+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.646807+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.647815+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.647815+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.652671+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.652671+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 
2026-03-09T18:48:00.699155+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.699155+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.700832+0000 mon.c (mon.1) 366 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.882 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.700832+0000 mon.c (mon.1) 366 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.701789+0000 mon.c (mon.1) 367 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.701789+0000 mon.c (mon.1) 367 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.707011+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.707011+0000 mon.a (mon.0) 552 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.762101+0000 mon.c (mon.1) 368 : audit [DBG] from='client.? 192.168.123.100:0/3928481967' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.762101+0000 mon.c (mon.1) 368 : audit [DBG] from='client.? 192.168.123.100:0/3928481967' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.958331+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:01 vm00 bash[65531]: audit 2026-03-09T18:48:00.958331+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cluster 2026-03-09T18:48:00.500643+0000 mon.a (mon.0) 513 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cluster 2026-03-09T18:48:00.500643+0000 mon.a (mon.0) 513 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cluster 2026-03-09T18:48:00.500661+0000 mon.a (mon.0) 514 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:01.883 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cluster 2026-03-09T18:48:00.500661+0000 mon.a (mon.0) 514 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.506809+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.506809+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cluster 2026-03-09T18:48:00.510101+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cluster 2026-03-09T18:48:00.510101+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.511153+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.511153+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.512030+0000 mgr.y (mgr.44107) 321 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 
2026-03-09T18:48:00.512030+0000 mgr.y (mgr.44107) 321 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.517984+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.519723+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.520293+0000 mgr.y (mgr.44107) 322 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.524334+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.528510+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.529483+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.529937+0000 mgr.y (mgr.44107) 323 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.536359+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.538229+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.538768+0000 mgr.y (mgr.44107) 324 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.542019+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.544058+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.544599+0000 mgr.y (mgr.44107) 325 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.547568+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.549696+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.883 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.550644+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.551596+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.552538+0000 mon.c (mon.1) 340 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.553414+0000 mon.c (mon.1) 341 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.554401+0000 mon.c (mon.1) 342 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.555183+0000 mgr.y (mgr.44107) 326 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.556000+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.556231+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.559144+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.560575+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.560819+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.563578+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.564971+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.565187+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.575148+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.578926+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.579295+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.581960+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.586920+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.587354+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.590120+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.594668+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.594943+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.596394+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.596594+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.599597+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.604129+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.604357+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.605443+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.605641+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.608468+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.613236+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.613494+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:48:01.884 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.614562+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.614760+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.618589+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.623709+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.623943+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.626507+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.630801+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.631013+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.632049+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.632246+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.633281+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.633475+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.634472+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.634654+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.635877+0000 mon.c (mon.1) 359 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.636078+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.637115+0000 mon.c (mon.1) 360 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.637324+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: cephadm 2026-03-09T18:48:00.638211+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.638611+0000 mon.c (mon.1) 361 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.638803+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.642027+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.645211+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format":
"json"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.645211+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.646807+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.646807+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.647815+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.647815+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.652671+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.652671+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 
2026-03-09T18:48:00.699155+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.699155+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.700832+0000 mon.c (mon.1) 366 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.700832+0000 mon.c (mon.1) 366 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.701789+0000 mon.c (mon.1) 367 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.701789+0000 mon.c (mon.1) 367 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.707011+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.707011+0000 mon.a (mon.0) 552 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.762101+0000 mon.c (mon.1) 368 : audit [DBG] from='client.? 192.168.123.100:0/3928481967' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.762101+0000 mon.c (mon.1) 368 : audit [DBG] from='client.? 192.168.123.100:0/3928481967' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.958331+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:01.885 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:01 vm00 bash[69512]: audit 2026-03-09T18:48:00.958331+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:02 vm08 bash[46122]: audit 2026-03-09T18:48:01.616813+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:02 vm08 bash[46122]: audit 2026-03-09T18:48:01.616813+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:02 vm08 bash[46122]: cluster 2026-03-09T18:48:02.102461+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 
GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:48:02.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:02 vm08 bash[46122]: cluster 2026-03-09T18:48:02.102461+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:02 vm00 bash[65531]: audit 2026-03-09T18:48:01.616813+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:02 vm00 bash[65531]: audit 2026-03-09T18:48:01.616813+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:02 vm00 bash[65531]: cluster 2026-03-09T18:48:02.102461+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:02 vm00 bash[65531]: cluster 2026-03-09T18:48:02.102461+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:02 vm00 bash[69512]: audit 2026-03-09T18:48:01.616813+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:02 vm00 bash[69512]: audit 2026-03-09T18:48:01.616813+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.25132 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:02 vm00 bash[69512]: cluster 2026-03-09T18:48:02.102461+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:48:02.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:02 vm00 bash[69512]: cluster 2026-03-09T18:48:02.102461+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:04 vm00 bash[65531]: audit 2026-03-09T18:48:03.106269+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:04 vm00 bash[65531]: audit 2026-03-09T18:48:03.106269+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:04 vm00 bash[65531]: audit 2026-03-09T18:48:03.109054+0000 mon.c (mon.1) 369 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:04 vm00 bash[65531]: audit 2026-03-09T18:48:03.109054+0000 mon.c (mon.1) 369 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:04 vm00 bash[65531]: audit 2026-03-09T18:48:03.317261+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:04 vm00 bash[65531]: audit 2026-03-09T18:48:03.317261+0000 
mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:04 vm00 bash[69512]: audit 2026-03-09T18:48:03.106269+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:04 vm00 bash[69512]: audit 2026-03-09T18:48:03.106269+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:04 vm00 bash[69512]: audit 2026-03-09T18:48:03.109054+0000 mon.c (mon.1) 369 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:04 vm00 bash[69512]: audit 2026-03-09T18:48:03.109054+0000 mon.c (mon.1) 369 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:04 vm00 bash[69512]: audit 2026-03-09T18:48:03.317261+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:04 vm00 bash[69512]: audit 2026-03-09T18:48:03.317261+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:04 vm08 bash[46122]: audit 2026-03-09T18:48:03.106269+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:04 vm08 bash[46122]: audit 2026-03-09T18:48:03.106269+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:04 vm08 bash[46122]: audit 2026-03-09T18:48:03.109054+0000 mon.c (mon.1) 
369 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:04 vm08 bash[46122]: audit 2026-03-09T18:48:03.109054+0000 mon.c (mon.1) 369 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:04 vm08 bash[46122]: audit 2026-03-09T18:48:03.317261+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:04.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:04 vm08 bash[46122]: audit 2026-03-09T18:48:03.317261+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:05 vm00 bash[69512]: cluster 2026-03-09T18:48:04.102809+0000 mgr.y (mgr.44107) 331 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:05.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:05 vm00 bash[69512]: cluster 2026-03-09T18:48:04.102809+0000 mgr.y (mgr.44107) 331 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:05 vm00 bash[65531]: cluster 2026-03-09T18:48:04.102809+0000 mgr.y (mgr.44107) 331 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:05.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:05 vm00 bash[65531]: cluster 2026-03-09T18:48:04.102809+0000 mgr.y (mgr.44107) 331 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 
2026-03-09T18:48:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:05 vm08 bash[46122]: cluster 2026-03-09T18:48:04.102809+0000 mgr.y (mgr.44107) 331 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:05 vm08 bash[46122]: cluster 2026-03-09T18:48:04.102809+0000 mgr.y (mgr.44107) 331 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:07.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:07 vm00 bash[69512]: cluster 2026-03-09T18:48:06.103224+0000 mgr.y (mgr.44107) 332 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:07.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:07 vm00 bash[69512]: cluster 2026-03-09T18:48:06.103224+0000 mgr.y (mgr.44107) 332 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:07.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:07 vm00 bash[65531]: cluster 2026-03-09T18:48:06.103224+0000 mgr.y (mgr.44107) 332 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:07.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:07 vm00 bash[65531]: cluster 2026-03-09T18:48:06.103224+0000 mgr.y (mgr.44107) 332 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:07 vm08 bash[46122]: cluster 2026-03-09T18:48:06.103224+0000 mgr.y (mgr.44107) 332 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s 
rd, 0 op/s 2026-03-09T18:48:07.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:07 vm08 bash[46122]: cluster 2026-03-09T18:48:06.103224+0000 mgr.y (mgr.44107) 332 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:09 vm08 bash[46122]: cluster 2026-03-09T18:48:08.103692+0000 mgr.y (mgr.44107) 333 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:09 vm08 bash[46122]: cluster 2026-03-09T18:48:08.103692+0000 mgr.y (mgr.44107) 333 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:09 vm00 bash[65531]: cluster 2026-03-09T18:48:08.103692+0000 mgr.y (mgr.44107) 333 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:09 vm00 bash[65531]: cluster 2026-03-09T18:48:08.103692+0000 mgr.y (mgr.44107) 333 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:09.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:09 vm00 bash[69512]: cluster 2026-03-09T18:48:08.103692+0000 mgr.y (mgr.44107) 333 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:09.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:09 vm00 bash[69512]: cluster 2026-03-09T18:48:08.103692+0000 mgr.y (mgr.44107) 333 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 
1023 B/s rd, 0 op/s 2026-03-09T18:48:09.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:48:09] "GET /metrics HTTP/1.1" 200 37962 "" "Prometheus/2.51.0" 2026-03-09T18:48:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:11 vm08 bash[46122]: cluster 2026-03-09T18:48:10.104058+0000 mgr.y (mgr.44107) 334 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:11 vm08 bash[46122]: cluster 2026-03-09T18:48:10.104058+0000 mgr.y (mgr.44107) 334 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:11 vm00 bash[69512]: cluster 2026-03-09T18:48:10.104058+0000 mgr.y (mgr.44107) 334 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:11 vm00 bash[69512]: cluster 2026-03-09T18:48:10.104058+0000 mgr.y (mgr.44107) 334 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:11 vm00 bash[65531]: cluster 2026-03-09T18:48:10.104058+0000 mgr.y (mgr.44107) 334 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:11 vm00 bash[65531]: cluster 2026-03-09T18:48:10.104058+0000 mgr.y (mgr.44107) 334 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T18:48:12.879 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:12 vm00 bash[69512]: audit 2026-03-09T18:48:11.622081+0000 mgr.y (mgr.44107) 335 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:12 vm00 bash[69512]: audit 2026-03-09T18:48:11.622081+0000 mgr.y (mgr.44107) 335 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:12 vm00 bash[69512]: cluster 2026-03-09T18:48:12.104482+0000 mgr.y (mgr.44107) 336 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 883 B/s rd, 0 op/s 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:12 vm00 bash[69512]: cluster 2026-03-09T18:48:12.104482+0000 mgr.y (mgr.44107) 336 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 883 B/s rd, 0 op/s 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:12 vm00 bash[65531]: audit 2026-03-09T18:48:11.622081+0000 mgr.y (mgr.44107) 335 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:12 vm00 bash[65531]: audit 2026-03-09T18:48:11.622081+0000 mgr.y (mgr.44107) 335 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:12 vm00 bash[65531]: cluster 2026-03-09T18:48:12.104482+0000 mgr.y (mgr.44107) 336 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 883 B/s 
rd, 0 op/s 2026-03-09T18:48:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:12 vm00 bash[65531]: cluster 2026-03-09T18:48:12.104482+0000 mgr.y (mgr.44107) 336 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 883 B/s rd, 0 op/s 2026-03-09T18:48:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:12 vm08 bash[46122]: audit 2026-03-09T18:48:11.622081+0000 mgr.y (mgr.44107) 335 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:12 vm08 bash[46122]: audit 2026-03-09T18:48:11.622081+0000 mgr.y (mgr.44107) 335 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:12 vm08 bash[46122]: cluster 2026-03-09T18:48:12.104482+0000 mgr.y (mgr.44107) 336 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 883 B/s rd, 0 op/s 2026-03-09T18:48:12.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:12 vm08 bash[46122]: cluster 2026-03-09T18:48:12.104482+0000 mgr.y (mgr.44107) 336 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 883 B/s rd, 0 op/s 2026-03-09T18:48:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:15 vm08 bash[46122]: cluster 2026-03-09T18:48:14.104799+0000 mgr.y (mgr.44107) 337 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:15 vm08 bash[46122]: cluster 2026-03-09T18:48:14.104799+0000 mgr.y (mgr.44107) 337 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:15.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:15 vm00 bash[69512]: cluster 2026-03-09T18:48:14.104799+0000 mgr.y (mgr.44107) 337 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:15.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:15 vm00 bash[69512]: cluster 2026-03-09T18:48:14.104799+0000 mgr.y (mgr.44107) 337 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:15 vm00 bash[65531]: cluster 2026-03-09T18:48:14.104799+0000 mgr.y (mgr.44107) 337 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:15 vm00 bash[65531]: cluster 2026-03-09T18:48:14.104799+0000 mgr.y (mgr.44107) 337 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:17 vm08 bash[46122]: cluster 2026-03-09T18:48:16.105153+0000 mgr.y (mgr.44107) 338 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:17 vm08 bash[46122]: cluster 2026-03-09T18:48:16.105153+0000 mgr.y (mgr.44107) 338 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:17.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:17 vm00 bash[69512]: cluster 2026-03-09T18:48:16.105153+0000 mgr.y (mgr.44107) 338 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:17 vm00 bash[69512]: cluster 2026-03-09T18:48:16.105153+0000 mgr.y (mgr.44107) 338 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:17 vm00 bash[65531]: cluster 2026-03-09T18:48:16.105153+0000 mgr.y (mgr.44107) 338 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:17 vm00 bash[65531]: cluster 2026-03-09T18:48:16.105153+0000 mgr.y (mgr.44107) 338 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:18 vm08 bash[46122]: audit 2026-03-09T18:48:18.102041+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:18 vm08 bash[46122]: audit 2026-03-09T18:48:18.102041+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:18.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:18 vm00 bash[69512]: audit 2026-03-09T18:48:18.102041+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:18 vm00 bash[69512]: audit 2026-03-09T18:48:18.102041+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:18 vm00 bash[65531]: audit 2026-03-09T18:48:18.102041+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:18 vm00 bash[65531]: audit 2026-03-09T18:48:18.102041+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:19 vm08 bash[46122]: cluster 2026-03-09T18:48:18.105528+0000 mgr.y (mgr.44107) 339 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:19 vm08 bash[46122]: cluster 2026-03-09T18:48:18.105528+0000 mgr.y (mgr.44107) 339 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:19.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:19 vm00 bash[69512]: cluster 2026-03-09T18:48:18.105528+0000 mgr.y (mgr.44107) 339 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:19.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:19 vm00 bash[69512]: cluster 2026-03-09T18:48:18.105528+0000 mgr.y (mgr.44107) 339 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:19.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:19 vm00 bash[65531]: cluster 2026-03-09T18:48:18.105528+0000 mgr.y (mgr.44107) 339 : cluster [DBG] pgmap v183: 
161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:19.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:19 vm00 bash[65531]: cluster 2026-03-09T18:48:18.105528+0000 mgr.y (mgr.44107) 339 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:48:19] "GET /metrics HTTP/1.1" 200 37962 "" "Prometheus/2.51.0" 2026-03-09T18:48:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:21 vm08 bash[46122]: cluster 2026-03-09T18:48:20.105849+0000 mgr.y (mgr.44107) 340 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:21 vm08 bash[46122]: cluster 2026-03-09T18:48:20.105849+0000 mgr.y (mgr.44107) 340 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:21.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:21 vm00 bash[65531]: cluster 2026-03-09T18:48:20.105849+0000 mgr.y (mgr.44107) 340 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:21.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:21 vm00 bash[65531]: cluster 2026-03-09T18:48:20.105849+0000 mgr.y (mgr.44107) 340 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:21.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:21 vm00 bash[69512]: cluster 2026-03-09T18:48:20.105849+0000 mgr.y (mgr.44107) 340 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:48:21.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:21 vm00 bash[69512]: cluster 2026-03-09T18:48:20.105849+0000 mgr.y (mgr.44107) 340 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:23 vm08 bash[46122]: audit 2026-03-09T18:48:21.632543+0000 mgr.y (mgr.44107) 341 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:23 vm08 bash[46122]: audit 2026-03-09T18:48:21.632543+0000 mgr.y (mgr.44107) 341 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:23 vm08 bash[46122]: cluster 2026-03-09T18:48:22.106355+0000 mgr.y (mgr.44107) 342 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:23 vm08 bash[46122]: cluster 2026-03-09T18:48:22.106355+0000 mgr.y (mgr.44107) 342 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:23 vm00 bash[69512]: audit 2026-03-09T18:48:21.632543+0000 mgr.y (mgr.44107) 341 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:23 vm00 bash[69512]: audit 2026-03-09T18:48:21.632543+0000 mgr.y (mgr.44107) 341 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:23 vm00 bash[69512]: cluster 2026-03-09T18:48:22.106355+0000 mgr.y (mgr.44107) 342 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:23 vm00 bash[69512]: cluster 2026-03-09T18:48:22.106355+0000 mgr.y (mgr.44107) 342 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:23 vm00 bash[65531]: audit 2026-03-09T18:48:21.632543+0000 mgr.y (mgr.44107) 341 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:23 vm00 bash[65531]: audit 2026-03-09T18:48:21.632543+0000 mgr.y (mgr.44107) 341 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:23 vm00 bash[65531]: cluster 2026-03-09T18:48:22.106355+0000 mgr.y (mgr.44107) 342 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:23.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:23 vm00 bash[65531]: cluster 2026-03-09T18:48:22.106355+0000 mgr.y (mgr.44107) 342 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:25 vm08 bash[46122]: cluster 2026-03-09T18:48:24.106720+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v186: 161 pgs: 
161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:25 vm08 bash[46122]: cluster 2026-03-09T18:48:24.106720+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:25.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:25 vm00 bash[69512]: cluster 2026-03-09T18:48:24.106720+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:25.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:25 vm00 bash[69512]: cluster 2026-03-09T18:48:24.106720+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:25 vm00 bash[65531]: cluster 2026-03-09T18:48:24.106720+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:25 vm00 bash[65531]: cluster 2026-03-09T18:48:24.106720+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:27.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:27 vm08 bash[46122]: cluster 2026-03-09T18:48:26.107105+0000 mgr.y (mgr.44107) 344 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:27.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:27 vm08 bash[46122]: cluster 2026-03-09T18:48:26.107105+0000 mgr.y (mgr.44107) 344 : cluster [DBG] pgmap v187: 161 
pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:27.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:27 vm00 bash[69512]: cluster 2026-03-09T18:48:26.107105+0000 mgr.y (mgr.44107) 344 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:27.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:27 vm00 bash[69512]: cluster 2026-03-09T18:48:26.107105+0000 mgr.y (mgr.44107) 344 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:27.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:27 vm00 bash[65531]: cluster 2026-03-09T18:48:26.107105+0000 mgr.y (mgr.44107) 344 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:27.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:27 vm00 bash[65531]: cluster 2026-03-09T18:48:26.107105+0000 mgr.y (mgr.44107) 344 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:29.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:29 vm08 bash[46122]: cluster 2026-03-09T18:48:28.107408+0000 mgr.y (mgr.44107) 345 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:29.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:29 vm08 bash[46122]: cluster 2026-03-09T18:48:28.107408+0000 mgr.y (mgr.44107) 345 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:29.522 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:29 vm00 bash[69512]: cluster 2026-03-09T18:48:28.107408+0000 mgr.y (mgr.44107) 345 : cluster [DBG] 
pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:29.522 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:29 vm00 bash[69512]: cluster 2026-03-09T18:48:28.107408+0000 mgr.y (mgr.44107) 345 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:29.522 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:29 vm00 bash[65531]: cluster 2026-03-09T18:48:28.107408+0000 mgr.y (mgr.44107) 345 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:29.522 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:29 vm00 bash[65531]: cluster 2026-03-09T18:48:28.107408+0000 mgr.y (mgr.44107) 345 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:48:29] "GET /metrics HTTP/1.1" 200 37962 "" "Prometheus/2.51.0" 2026-03-09T18:48:31.236 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T18:48:31.473 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:31 vm00 bash[65531]: cluster 2026-03-09T18:48:30.107648+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:31.473 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:31 vm00 bash[65531]: cluster 2026-03-09T18:48:30.107648+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB 
data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:31.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:31 vm00 bash[69512]: cluster 2026-03-09T18:48:30.107648+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:31.474 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:31 vm00 bash[69512]: cluster 2026-03-09T18:48:30.107648+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:31.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:31 vm08 bash[46122]: cluster 2026-03-09T18:48:30.107648+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:31.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:31 vm08 bash[46122]: cluster 2026-03-09T18:48:30.107648+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (18m) 115s ago 25m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (6m) 38s ago 25m 67.1M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (6m) 115s ago 25m 44.2M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (6m) 38s ago 28m 466M - 19.2.3-678-ge911bdeb 654f31e6858e 
e51f08afe84e 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (15m) 115s ago 29m 530M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (4m) 115s ago 29m 49.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (5m) 38s ago 28m 48.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (5m) 115s ago 28m 46.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:48:31.672 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (18m) 115s ago 25m 8028k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (18m) 38s ago 25m 8055k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (2m) 115s ago 28m 45.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (2m) 115s ago 27m 22.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (3m) 115s ago 27m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (3m) 115s ago 27m 69.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (104s) 38s ago 27m 51.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (73s) 38s ago 26m 68.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40 2026-03-09T18:48:31.673 
INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (57s) 38s ago 26m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (43s) 38s ago 26m 22.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9904fad47d23 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (6m) 38s ago 25m 43.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (25m) 115s ago 25m 89.4M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:48:31.673 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (25m) 38s ago 25m 90.8M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:48:31.718 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | length == 1'"'"'' 2026-03-09T18:48:32.188 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:48:32.228 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | keys'"'"' | grep $sha1' 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:32 vm00 bash[65531]: audit 2026-03-09T18:48:31.167720+0000 mgr.y (mgr.44107) 347 : audit [DBG] from='client.54471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:32 vm00 bash[65531]: audit 
2026-03-09T18:48:31.167720+0000 mgr.y (mgr.44107) 347 : audit [DBG] from='client.54471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:32 vm00 bash[65531]: audit 2026-03-09T18:48:32.182075+0000 mon.a (mon.0) 555 : audit [DBG] from='client.? 192.168.123.100:0/1367222255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:32 vm00 bash[65531]: audit 2026-03-09T18:48:32.182075+0000 mon.a (mon.0) 555 : audit [DBG] from='client.? 192.168.123.100:0/1367222255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:32 vm00 bash[69512]: audit 2026-03-09T18:48:31.167720+0000 mgr.y (mgr.44107) 347 : audit [DBG] from='client.54471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:32 vm00 bash[69512]: audit 2026-03-09T18:48:31.167720+0000 mgr.y (mgr.44107) 347 : audit [DBG] from='client.54471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:32 vm00 bash[69512]: audit 2026-03-09T18:48:32.182075+0000 mon.a (mon.0) 555 : audit [DBG] from='client.? 192.168.123.100:0/1367222255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:32.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:32 vm00 bash[69512]: audit 2026-03-09T18:48:32.182075+0000 mon.a (mon.0) 555 : audit [DBG] from='client.? 
192.168.123.100:0/1367222255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:32.703 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T18:48:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:32 vm08 bash[46122]: audit 2026-03-09T18:48:31.167720+0000 mgr.y (mgr.44107) 347 : audit [DBG] from='client.54471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:32 vm08 bash[46122]: audit 2026-03-09T18:48:31.167720+0000 mgr.y (mgr.44107) 347 : audit [DBG] from='client.54471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:32 vm08 bash[46122]: audit 2026-03-09T18:48:32.182075+0000 mon.a (mon.0) 555 : audit [DBG] from='client.? 192.168.123.100:0/1367222255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:32.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:32 vm08 bash[46122]: audit 2026-03-09T18:48:32.182075+0000 mon.a (mon.0) 555 : audit [DBG] from='client.? 
192.168.123.100:0/1367222255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:32.741 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null, 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false, 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "which": "", 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null, 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "message": "", 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:48:33.145 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:48:33.190 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:31.641533+0000 mgr.y (mgr.44107) 348 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:31.641533+0000 mgr.y (mgr.44107) 348 : audit [DBG] from='client.25132 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:31.672560+0000 mgr.y (mgr.44107) 349 : audit [DBG] from='client.44491 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:31.672560+0000 mgr.y (mgr.44107) 349 : audit [DBG] from='client.44491 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: cluster 2026-03-09T18:48:32.108072+0000 mgr.y (mgr.44107) 350 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: cluster 2026-03-09T18:48:32.108072+0000 mgr.y (mgr.44107) 350 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:32.696129+0000 mon.a (mon.0) 556 : audit [DBG] from='client.? 192.168.123.100:0/574353606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:32.696129+0000 mon.a (mon.0) 556 : audit [DBG] from='client.? 
192.168.123.100:0/574353606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:33.102369+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:33 vm00 bash[65531]: audit 2026-03-09T18:48:33.102369+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:31.641533+0000 mgr.y (mgr.44107) 348 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:31.641533+0000 mgr.y (mgr.44107) 348 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:31.672560+0000 mgr.y (mgr.44107) 349 : audit [DBG] from='client.44491 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:31.672560+0000 mgr.y (mgr.44107) 349 : audit [DBG] from='client.44491 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: cluster 2026-03-09T18:48:32.108072+0000 mgr.y (mgr.44107) 350 : cluster [DBG] 
pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: cluster 2026-03-09T18:48:32.108072+0000 mgr.y (mgr.44107) 350 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:32.696129+0000 mon.a (mon.0) 556 : audit [DBG] from='client.? 192.168.123.100:0/574353606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:32.696129+0000 mon.a (mon.0) 556 : audit [DBG] from='client.? 192.168.123.100:0/574353606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:33.102369+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:33.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:33 vm00 bash[69512]: audit 2026-03-09T18:48:33.102369+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:33.671 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:48:33.715 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --services rgw.foo' 
2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:31.641533+0000 mgr.y (mgr.44107) 348 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:31.641533+0000 mgr.y (mgr.44107) 348 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:31.672560+0000 mgr.y (mgr.44107) 349 : audit [DBG] from='client.44491 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:31.672560+0000 mgr.y (mgr.44107) 349 : audit [DBG] from='client.44491 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: cluster 2026-03-09T18:48:32.108072+0000 mgr.y (mgr.44107) 350 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: cluster 2026-03-09T18:48:32.108072+0000 mgr.y (mgr.44107) 350 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:32.696129+0000 mon.a (mon.0) 556 : audit [DBG] from='client.? 
192.168.123.100:0/574353606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:32.696129+0000 mon.a (mon.0) 556 : audit [DBG] from='client.? 192.168.123.100:0/574353606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:33.102369+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:33 vm08 bash[46122]: audit 2026-03-09T18:48:33.102369+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:34 vm00 bash[65531]: audit 2026-03-09T18:48:33.148920+0000 mgr.y (mgr.44107) 351 : audit [DBG] from='client.44500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:34 vm00 bash[65531]: audit 2026-03-09T18:48:33.148920+0000 mgr.y (mgr.44107) 351 : audit [DBG] from='client.44500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:34 vm00 bash[65531]: audit 2026-03-09T18:48:33.674557+0000 mon.a (mon.0) 557 : audit [DBG] from='client.? 
192.168.123.100:0/1572087695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:48:34.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:34 vm00 bash[65531]: audit 2026-03-09T18:48:33.674557+0000 mon.a (mon.0) 557 : audit [DBG] from='client.? 192.168.123.100:0/1572087695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:48:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:34 vm00 bash[69512]: audit 2026-03-09T18:48:33.148920+0000 mgr.y (mgr.44107) 351 : audit [DBG] from='client.44500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:34 vm00 bash[69512]: audit 2026-03-09T18:48:33.148920+0000 mgr.y (mgr.44107) 351 : audit [DBG] from='client.44500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:34 vm00 bash[69512]: audit 2026-03-09T18:48:33.674557+0000 mon.a (mon.0) 557 : audit [DBG] from='client.? 192.168.123.100:0/1572087695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:48:34.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:34 vm00 bash[69512]: audit 2026-03-09T18:48:33.674557+0000 mon.a (mon.0) 557 : audit [DBG] from='client.? 
192.168.123.100:0/1572087695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:48:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:34 vm08 bash[46122]: audit 2026-03-09T18:48:33.148920+0000 mgr.y (mgr.44107) 351 : audit [DBG] from='client.44500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:34 vm08 bash[46122]: audit 2026-03-09T18:48:33.148920+0000 mgr.y (mgr.44107) 351 : audit [DBG] from='client.44500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:34 vm08 bash[46122]: audit 2026-03-09T18:48:33.674557+0000 mon.a (mon.0) 557 : audit [DBG] from='client.? 192.168.123.100:0/1572087695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:48:34.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:34 vm08 bash[46122]: audit 2026-03-09T18:48:33.674557+0000 mon.a (mon.0) 557 : audit [DBG] from='client.? 
192.168.123.100:0/1572087695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:35 vm00 bash[65531]: cluster 2026-03-09T18:48:34.108344+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:35 vm00 bash[65531]: cluster 2026-03-09T18:48:34.108344+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:35 vm00 bash[65531]: audit 2026-03-09T18:48:34.136782+0000 mgr.y (mgr.44107) 353 : audit [DBG] from='client.34427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:35 vm00 bash[65531]: audit 2026-03-09T18:48:34.136782+0000 mgr.y (mgr.44107) 353 : audit [DBG] from='client.34427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:35 vm00 bash[69512]: cluster 2026-03-09T18:48:34.108344+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:35 vm00 bash[69512]: cluster 2026-03-09T18:48:34.108344+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:35 vm00 bash[69512]: audit 2026-03-09T18:48:34.136782+0000 mgr.y (mgr.44107) 353 : audit [DBG] from='client.34427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:35 vm00 bash[69512]: audit 2026-03-09T18:48:34.136782+0000 mgr.y (mgr.44107) 353 : audit [DBG] from='client.34427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:35 vm08 bash[46122]: cluster 2026-03-09T18:48:34.108344+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:35 vm08 bash[46122]: cluster 2026-03-09T18:48:34.108344+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:35 vm08 bash[46122]: audit 2026-03-09T18:48:34.136782+0000 mgr.y (mgr.44107) 353 : audit [DBG] from='client.34427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:35 vm08 bash[46122]: audit 2026-03-09T18:48:34.136782+0000 mgr.y (mgr.44107) 353 : audit [DBG] from='client.34427 -' 
entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:35.727 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:35.804 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-09T18:48:36.293 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (18m) 2m ago 25m 14.5M - 0.25.0 c8568f914cd2 2a8a29ecee54 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (6m) 43s ago 25m 67.1M - 10.4.0 c8b91775d855 5e0e30d27ab2 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (6m) 2m ago 25m 44.2M - 3.5 e1d6a67b021e ff3da66cebe9 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (6m) 43s ago 28m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (16m) 2m ago 29m 530M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 
running (4m) 2m ago 29m 49.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (5m) 43s ago 28m 48.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (5m) 2m ago 28m 46.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (18m) 2m ago 26m 8028k - 1.7.0 72c9c2088986 c2e3e3202fde 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (18m) 43s ago 26m 8055k - 1.7.0 72c9c2088986 7a7d1ed8c801 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (2m) 2m ago 28m 45.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (2m) 2m ago 28m 22.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (3m) 2m ago 27m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64 2026-03-09T18:48:36.674 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (3m) 2m ago 27m 69.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (109s) 43s ago 27m 51.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (78s) 43s ago 26m 68.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (62s) 43s ago 26m 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (48s) 43s ago 26m 22.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 
9904fad47d23 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (6m) 43s ago 25m 43.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (25m) 2m ago 25m 89.4M - 17.2.0 e1d6a67b021e 671fa80b7e00 2026-03-09T18:48:36.675 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (25m) 43s ago 25m 90.8M - 17.2.0 e1d6a67b021e 1fbcce983317 2026-03-09T18:48:36.914 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:48:36.914 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 17.2.0 
(43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2, 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 13 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-09T18:48:36.915 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:48:37.077 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: cephadm 2026-03-09T18:48:35.723518+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: cephadm 2026-03-09T18:48:35.723518+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.728358+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.728358+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.731318+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.731318+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 
2026-03-09T18:48:35.735883+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.735883+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.738336+0000 mon.c (mon.1) 374 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.738336+0000 mon.c (mon.1) 374 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.744541+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:35.744541+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: cephadm 2026-03-09T18:48:35.798987+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: cephadm 2026-03-09T18:48:35.798987+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 
2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: cluster 2026-03-09T18:48:36.108776+0000 mgr.y (mgr.44107) 356 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: cluster 2026-03-09T18:48:36.108776+0000 mgr.y (mgr.44107) 356 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:36.285684+0000 mgr.y (mgr.44107) 357 : audit [DBG] from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:36 vm00 bash[69512]: audit 2026-03-09T18:48:36.285684+0000 mgr.y (mgr.44107) 357 : audit [DBG] from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: cephadm 2026-03-09T18:48:35.723518+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: cephadm 2026-03-09T18:48:35.723518+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.728358+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: 
audit 2026-03-09T18:48:35.728358+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.731318+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.731318+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.735883+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.735883+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.738336+0000 mon.c (mon.1) 374 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.738336+0000 mon.c (mon.1) 374 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.744541+0000 mon.a (mon.0) 559 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:35.744541+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: cephadm 2026-03-09T18:48:35.798987+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: cephadm 2026-03-09T18:48:35.798987+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: cluster 2026-03-09T18:48:36.108776+0000 mgr.y (mgr.44107) 356 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: cluster 2026-03-09T18:48:36.108776+0000 mgr.y (mgr.44107) 356 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:36.285684+0000 mgr.y (mgr.44107) 357 : audit [DBG] from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.078 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:36 vm00 bash[65531]: audit 2026-03-09T18:48:36.285684+0000 mgr.y (mgr.44107) 357 : audit [DBG] from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout:{ 
2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading daemons in service(s) rgw.foo", 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "", 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:48:37.150 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: cephadm 2026-03-09T18:48:35.723518+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: cephadm 2026-03-09T18:48:35.723518+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.728358+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.728358+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.731318+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.731318+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.735883+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.735883+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.738336+0000 mon.c (mon.1) 374 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.738336+0000 mon.c (mon.1) 374 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.744541+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:35.744541+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:37.224 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: cephadm 2026-03-09T18:48:35.798987+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: cephadm 2026-03-09T18:48:35.798987+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: cluster 2026-03-09T18:48:36.108776+0000 mgr.y (mgr.44107) 356 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: cluster 2026-03-09T18:48:36.108776+0000 mgr.y (mgr.44107) 356 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:36.285684+0000 mgr.y (mgr.44107) 357 : audit [DBG] from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:36 vm08 bash[46122]: audit 2026-03-09T18:48:36.285684+0000 mgr.y (mgr.44107) 357 : audit [DBG] from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:36.487628+0000 mgr.y (mgr.44107) 358 : audit [DBG] from='client.34430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.879 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:36.487628+0000 mgr.y (mgr.44107) 358 : audit [DBG] from='client.34430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:36.674586+0000 mgr.y (mgr.44107) 359 : audit [DBG] from='client.34433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:36.674586+0000 mgr.y (mgr.44107) 359 : audit [DBG] from='client.34433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:36.918079+0000 mon.c (mon.1) 375 : audit [DBG] from='client.? 192.168.123.100:0/1472463387' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:36.918079+0000 mon.c (mon.1) 375 : audit [DBG] from='client.? 
192.168.123.100:0/1472463387' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.154169+0000 mgr.y (mgr.44107) 360 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.231540+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.233197+0000 mgr.y (mgr.44107) 361 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.233223+0000 mgr.y (mgr.44107) 362 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.234299+0000 mon.c (mon.1) 376 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.235747+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.236546+0000 mgr.y (mgr.44107) 363 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:36.487628+0000 mgr.y (mgr.44107) 358 : audit [DBG] from='client.34430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:36.674586+0000 mgr.y (mgr.44107) 359 : audit [DBG] from='client.34433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:36.918079+0000 mon.c (mon.1) 375 : audit [DBG] from='client.? 192.168.123.100:0/1472463387' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.154169+0000 mgr.y (mgr.44107) 360 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.231540+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.233197+0000 mgr.y (mgr.44107) 361 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.233223+0000 mgr.y (mgr.44107) 362 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.234299+0000 mon.c (mon.1) 376 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.235747+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.236546+0000 mgr.y (mgr.44107) 363 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:48:37.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.240868+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.243243+0000 mon.c (mon.1) 378 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.243812+0000 mgr.y (mgr.44107) 364 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.247244+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.250830+0000 mon.c (mon.1) 379 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.251477+0000 mgr.y (mgr.44107) 365 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.254758+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.257069+0000 mon.c (mon.1) 380 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.257652+0000 mgr.y (mgr.44107) 366 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.261040+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.263957+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.264536+0000 mgr.y (mgr.44107) 367 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.267993+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.664921+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.667034+0000 mon.c (mon.1) 382 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.667276+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:37 vm00 bash[65531]: audit 2026-03-09T18:48:37.668907+0000 mon.c (mon.1) 383 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.240868+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.243243+0000 mon.c (mon.1) 378 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.243812+0000 mgr.y (mgr.44107) 364 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.247244+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.250830+0000 mon.c (mon.1) 379 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.251477+0000 mgr.y (mgr.44107) 365 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.254758+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.257069+0000 mon.c (mon.1) 380 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.257652+0000 mgr.y (mgr.44107) 366 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.261040+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.263957+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:37.880 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.264536+0000 mgr.y (mgr.44107) 367 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:48:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.267993+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.664921+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.667034+0000 mon.c (mon.1) 382 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:48:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.667276+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-09T18:48:37.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:37 vm00 bash[69512]: audit 2026-03-09T18:48:37.668907+0000 mon.c (mon.1) 383 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:48:38.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.204 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:36.487628+0000 mgr.y (mgr.44107) 358 : audit [DBG] from='client.34430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:36.674586+0000 mgr.y (mgr.44107) 359 : audit [DBG] from='client.34433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:36.918079+0000 mon.c (mon.1) 375 : audit [DBG] from='client.? 192.168.123.100:0/1472463387' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.154169+0000 mgr.y (mgr.44107) 360 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.231540+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.233197+0000 mgr.y (mgr.44107) 361 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.233223+0000 mgr.y (mgr.44107) 362 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.234299+0000 mon.c (mon.1) 376 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.235747+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.236546+0000 mgr.y (mgr.44107) 363 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.240868+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:38.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.243243+0000 mon.c (mon.1) 378 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.243812+0000 mgr.y (mgr.44107) 364 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.247244+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.250830+0000 mon.c (mon.1) 379 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.251477+0000 mgr.y (mgr.44107) 365 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.254758+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.257069+0000 mon.c (mon.1) 380 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.257652+0000 mgr.y (mgr.44107) 366 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.261040+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.263957+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.264536+0000 mgr.y (mgr.44107) 367 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.267993+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 
2026-03-09T18:48:37.664921+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.664921+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.667034+0000 mon.c (mon.1) 382 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.667034+0000 mon.c (mon.1) 382 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.667276+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.667276+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm00.ygjynr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.668907+0000 mon.c (mon.1) 383 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:38.225 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:37 vm08 bash[46122]: audit 2026-03-09T18:48:37.668907+0000 mon.c (mon.1) 383 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.660231+0000 mgr.y (mgr.44107) 368 : cephadm [INF] Upgrade: Updating rgw.foo.vm00.ygjynr (1/2) 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.660231+0000 mgr.y (mgr.44107) 368 : cephadm [INF] Upgrade: Updating rgw.foo.vm00.ygjynr (1/2) 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.669854+0000 mgr.y (mgr.44107) 369 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 bash[65531]: cephadm 2026-03-09T18:48:37.669854+0000 mgr.y (mgr.44107) 369 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 bash[65531]: cluster 2026-03-09T18:48:38.109201+0000 mgr.y (mgr.44107) 370 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 bash[65531]: cluster 2026-03-09T18:48:38.109201+0000 mgr.y (mgr.44107) 370 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:38.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: 
/etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:38.881 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:38.881 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.660231+0000 mgr.y (mgr.44107) 368 : cephadm [INF] Upgrade: Updating rgw.foo.vm00.ygjynr (1/2) 2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.660231+0000 mgr.y (mgr.44107) 368 : cephadm [INF] Upgrade: Updating rgw.foo.vm00.ygjynr (1/2) 2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.669854+0000 mgr.y (mgr.44107) 369 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 bash[69512]: cephadm 2026-03-09T18:48:37.669854+0000 mgr.y (mgr.44107) 369 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 bash[69512]: cluster 2026-03-09T18:48:38.109201+0000 mgr.y (mgr.44107) 370 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 bash[69512]: cluster 2026-03-09T18:48:38.109201+0000 mgr.y (mgr.44107) 370 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:38.881 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:38.881 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:38.881 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:38.881 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:38.881 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:38.881 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:48:38 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:38 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.660231+0000 mgr.y (mgr.44107) 368 : cephadm [INF] Upgrade: Updating rgw.foo.vm00.ygjynr (1/2) 2026-03-09T18:48:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:38 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.660231+0000 mgr.y (mgr.44107) 368 : cephadm [INF] Upgrade: Updating rgw.foo.vm00.ygjynr (1/2) 2026-03-09T18:48:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:38 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.669854+0000 mgr.y (mgr.44107) 369 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:48:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:38 vm08 bash[46122]: cephadm 2026-03-09T18:48:37.669854+0000 mgr.y (mgr.44107) 369 : cephadm [INF] Deploying daemon rgw.foo.vm00.ygjynr on vm00 2026-03-09T18:48:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:38 vm08 bash[46122]: cluster 2026-03-09T18:48:38.109201+0000 mgr.y (mgr.44107) 370 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:39.224 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:38 vm08 bash[46122]: cluster 2026-03-09T18:48:38.109201+0000 mgr.y (mgr.44107) 370 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:48:39.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:39 vm00 
bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:48:39] "GET /metrics HTTP/1.1" 200 37962 "" "Prometheus/2.51.0" 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:38.914176+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:38.914176+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:38.922941+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:38.922941+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.562224+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.562224+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.567394+0000 mon.c (mon.1) 384 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.567394+0000 mon.c (mon.1) 384 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": 
"client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.567695+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.567695+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.569491+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:39 vm08 bash[46122]: audit 2026-03-09T18:48:39.569491+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:40.146 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:40.146 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.147 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.147 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.147 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:40.147 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.147 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.147 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.147 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:40.378 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:38.914176+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:38.914176+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:38.922941+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:38.922941+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.562224+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.562224+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.567394+0000 mon.c (mon.1) 384 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.567394+0000 mon.c (mon.1) 384 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 
2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.567695+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.567695+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.569491+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:39 vm00 bash[65531]: audit 2026-03-09T18:48:39.569491+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:38.914176+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:38.914176+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:38.922941+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 
bash[69512]: audit 2026-03-09T18:48:38.922941+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.562224+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.562224+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.567394+0000 mon.c (mon.1) 384 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.567394+0000 mon.c (mon.1) 384 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.567695+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.567695+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm08.rcuedn", "caps": ["mon", "allow *", "mgr", "allow rw", 
"osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.569491+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:40.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:39 vm00 bash[69512]: audit 2026-03-09T18:48:39.569491+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:40.693 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.693 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.693 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:40.693 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.693 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.693 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.695 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:48:40.695 INFO:journalctl@ceph.prometheus.a.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.695 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:48:40 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: cephadm 2026-03-09T18:48:39.557102+0000 mgr.y (mgr.44107) 371 : cephadm [INF] Upgrade: Updating rgw.foo.vm08.rcuedn (2/2) 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: cephadm 2026-03-09T18:48:39.557102+0000 mgr.y (mgr.44107) 371 : cephadm [INF] Upgrade: Updating rgw.foo.vm08.rcuedn (2/2) 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: cephadm 2026-03-09T18:48:39.570331+0000 mgr.y (mgr.44107) 372 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: cephadm 2026-03-09T18:48:39.570331+0000 mgr.y (mgr.44107) 372 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: cluster 2026-03-09T18:48:40.109628+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v194: 161 
pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 85 B/s wr, 17 op/s 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: cluster 2026-03-09T18:48:40.109628+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 85 B/s wr, 17 op/s 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: audit 2026-03-09T18:48:40.729166+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: audit 2026-03-09T18:48:40.729166+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: audit 2026-03-09T18:48:40.738591+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: audit 2026-03-09T18:48:40.738591+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: audit 2026-03-09T18:48:40.742204+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:40.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:40 vm08 bash[46122]: audit 2026-03-09T18:48:40.742204+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: cephadm 2026-03-09T18:48:39.557102+0000 mgr.y (mgr.44107) 371 : cephadm [INF] Upgrade: Updating rgw.foo.vm08.rcuedn 
(2/2) 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: cephadm 2026-03-09T18:48:39.557102+0000 mgr.y (mgr.44107) 371 : cephadm [INF] Upgrade: Updating rgw.foo.vm08.rcuedn (2/2) 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: cephadm 2026-03-09T18:48:39.570331+0000 mgr.y (mgr.44107) 372 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: cephadm 2026-03-09T18:48:39.570331+0000 mgr.y (mgr.44107) 372 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: cluster 2026-03-09T18:48:40.109628+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 85 B/s wr, 17 op/s 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: cluster 2026-03-09T18:48:40.109628+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 85 B/s wr, 17 op/s 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: audit 2026-03-09T18:48:40.729166+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: audit 2026-03-09T18:48:40.729166+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: audit 2026-03-09T18:48:40.738591+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: audit 2026-03-09T18:48:40.738591+0000 
mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: audit 2026-03-09T18:48:40.742204+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:40 vm00 bash[65531]: audit 2026-03-09T18:48:40.742204+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: cephadm 2026-03-09T18:48:39.557102+0000 mgr.y (mgr.44107) 371 : cephadm [INF] Upgrade: Updating rgw.foo.vm08.rcuedn (2/2) 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: cephadm 2026-03-09T18:48:39.557102+0000 mgr.y (mgr.44107) 371 : cephadm [INF] Upgrade: Updating rgw.foo.vm08.rcuedn (2/2) 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: cephadm 2026-03-09T18:48:39.570331+0000 mgr.y (mgr.44107) 372 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: cephadm 2026-03-09T18:48:39.570331+0000 mgr.y (mgr.44107) 372 : cephadm [INF] Deploying daemon rgw.foo.vm08.rcuedn on vm08 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: cluster 2026-03-09T18:48:40.109628+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 85 B/s wr, 17 op/s 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: cluster 2026-03-09T18:48:40.109628+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap 
v194: 161 pgs: 161 active+clean; 457 KiB data, 276 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 85 B/s wr, 17 op/s 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: audit 2026-03-09T18:48:40.729166+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: audit 2026-03-09T18:48:40.729166+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: audit 2026-03-09T18:48:40.738591+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: audit 2026-03-09T18:48:40.738591+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: audit 2026-03-09T18:48:40.742204+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:41.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:40 vm00 bash[69512]: audit 2026-03-09T18:48:40.742204+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:43 vm08 bash[46122]: audit 2026-03-09T18:48:41.645538+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:43 vm08 bash[46122]: audit 2026-03-09T18:48:41.645538+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.25132 -' 
entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:43 vm08 bash[46122]: cluster 2026-03-09T18:48:42.110137+0000 mgr.y (mgr.44107) 375 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 68 KiB/s rd, 85 B/s wr, 103 op/s 2026-03-09T18:48:43.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:43 vm08 bash[46122]: cluster 2026-03-09T18:48:42.110137+0000 mgr.y (mgr.44107) 375 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 68 KiB/s rd, 85 B/s wr, 103 op/s 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:43 vm00 bash[69512]: audit 2026-03-09T18:48:41.645538+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:43 vm00 bash[69512]: audit 2026-03-09T18:48:41.645538+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:43 vm00 bash[69512]: cluster 2026-03-09T18:48:42.110137+0000 mgr.y (mgr.44107) 375 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 68 KiB/s rd, 85 B/s wr, 103 op/s 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:43 vm00 bash[69512]: cluster 2026-03-09T18:48:42.110137+0000 mgr.y (mgr.44107) 375 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 68 KiB/s rd, 85 B/s wr, 103 op/s 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:43 vm00 bash[65531]: audit 
2026-03-09T18:48:41.645538+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:43 vm00 bash[65531]: audit 2026-03-09T18:48:41.645538+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:43 vm00 bash[65531]: cluster 2026-03-09T18:48:42.110137+0000 mgr.y (mgr.44107) 375 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 68 KiB/s rd, 85 B/s wr, 103 op/s 2026-03-09T18:48:43.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:43 vm00 bash[65531]: cluster 2026-03-09T18:48:42.110137+0000 mgr.y (mgr.44107) 375 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 68 KiB/s rd, 85 B/s wr, 103 op/s 2026-03-09T18:48:45.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:45 vm08 bash[46122]: cluster 2026-03-09T18:48:44.110473+0000 mgr.y (mgr.44107) 376 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s rd, 85 B/s wr, 102 op/s 2026-03-09T18:48:45.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:45 vm08 bash[46122]: cluster 2026-03-09T18:48:44.110473+0000 mgr.y (mgr.44107) 376 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s rd, 85 B/s wr, 102 op/s 2026-03-09T18:48:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:45 vm00 bash[65531]: cluster 2026-03-09T18:48:44.110473+0000 mgr.y (mgr.44107) 376 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s rd, 85 B/s wr, 102 op/s 
2026-03-09T18:48:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:45 vm00 bash[65531]: cluster 2026-03-09T18:48:44.110473+0000 mgr.y (mgr.44107) 376 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s rd, 85 B/s wr, 102 op/s 2026-03-09T18:48:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:45 vm00 bash[69512]: cluster 2026-03-09T18:48:44.110473+0000 mgr.y (mgr.44107) 376 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s rd, 85 B/s wr, 102 op/s 2026-03-09T18:48:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:45 vm00 bash[69512]: cluster 2026-03-09T18:48:44.110473+0000 mgr.y (mgr.44107) 376 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 280 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s rd, 85 B/s wr, 102 op/s 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: cluster 2026-03-09T18:48:46.110935+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 104 KiB/s rd, 170 B/s wr, 160 op/s 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: cluster 2026-03-09T18:48:46.110935+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 104 KiB/s rd, 170 B/s wr, 160 op/s 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.118886+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.118886+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 
bash[65531]: audit 2026-03-09T18:48:46.130082+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.130082+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.146559+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.146559+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.153661+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.153661+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.716242+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.716242+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.726498+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.726498+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 
bash[65531]: audit 2026-03-09T18:48:46.736853+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.736853+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.745025+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:47 vm00 bash[65531]: audit 2026-03-09T18:48:46.745025+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: cluster 2026-03-09T18:48:46.110935+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 104 KiB/s rd, 170 B/s wr, 160 op/s 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: cluster 2026-03-09T18:48:46.110935+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 104 KiB/s rd, 170 B/s wr, 160 op/s 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.118886+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.118886+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.130082+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 
vm00 bash[69512]: audit 2026-03-09T18:48:46.130082+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.146559+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.146559+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.153661+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.153661+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.716242+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.716242+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.726498+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.726498+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.736853+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 
vm00 bash[69512]: audit 2026-03-09T18:48:46.736853+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.745025+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.379 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:47 vm00 bash[69512]: audit 2026-03-09T18:48:46.745025+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: cluster 2026-03-09T18:48:46.110935+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 104 KiB/s rd, 170 B/s wr, 160 op/s 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: cluster 2026-03-09T18:48:46.110935+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 104 KiB/s rd, 170 B/s wr, 160 op/s 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.118886+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.118886+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.130082+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.130082+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.146559+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.146559+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.153661+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.153661+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.716242+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.716242+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.726498+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.726498+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.736853+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.736853+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.745025+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:47.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:47 vm08 bash[46122]: audit 2026-03-09T18:48:46.745025+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:48 vm08 bash[46122]: audit 2026-03-09T18:48:48.102453+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:48.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:48 vm08 bash[46122]: audit 2026-03-09T18:48:48.102453+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:48 vm00 bash[65531]: audit 2026-03-09T18:48:48.102453+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:48.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:48 vm00 bash[65531]: audit 2026-03-09T18:48:48.102453+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:48 vm00 bash[69512]: audit 2026-03-09T18:48:48.102453+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:48 vm00 bash[69512]: audit 2026-03-09T18:48:48.102453+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: cluster 2026-03-09T18:48:48.111931+0000 mgr.y (mgr.44107) 378 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s 2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: cluster 2026-03-09T18:48:48.111931+0000 mgr.y (mgr.44107) 378 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s 2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.338047+0000 mon.c (mon.1) 388 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch 2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.338047+0000 mon.c (mon.1) 388 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch 2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.338415+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch 2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.338415+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch 2026-03-09T18:48:49.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.338655+0000 mon.c (mon.1) 389 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]: dispatch
2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.338863+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]: dispatch
2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.339251+0000 mon.c (mon.1) 390 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]: dispatch
2026-03-09T18:48:49.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:49 vm08 bash[46122]: audit 2026-03-09T18:48:48.339432+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: cluster 2026-03-09T18:48:48.111931+0000 mgr.y (mgr.44107) 378 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: audit 2026-03-09T18:48:48.338047+0000 mon.c (mon.1) 388 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: audit 2026-03-09T18:48:48.338415+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: audit 2026-03-09T18:48:48.338655+0000 mon.c (mon.1) 389 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: audit 2026-03-09T18:48:48.338863+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: audit 2026-03-09T18:48:48.339251+0000 mon.c (mon.1) 390 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:49 vm00 bash[69512]: audit 2026-03-09T18:48:48.339432+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: cluster 2026-03-09T18:48:48.111931+0000 mgr.y (mgr.44107) 378 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: audit 2026-03-09T18:48:48.338047+0000 mon.c (mon.1) 388 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: audit 2026-03-09T18:48:48.338415+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: audit 2026-03-09T18:48:48.338655+0000 mon.c (mon.1) 389 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: audit 2026-03-09T18:48:48.338863+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]: dispatch
2026-03-09T18:48:49.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: audit 2026-03-09T18:48:48.339251+0000 mon.c (mon.1) 390 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]: dispatch
2026-03-09T18:48:49.525 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:49 vm00 bash[65531]: audit 2026-03-09T18:48:48.339432+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]: dispatch
2026-03-09T18:48:49.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:48:49] "GET /metrics HTTP/1.1" 200 37991 "" "Prometheus/2.51.0"
2026-03-09T18:48:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:50 vm08 bash[46122]: audit 2026-03-09T18:48:49.143352+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]': finished
2026-03-09T18:48:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:50 vm08 bash[46122]: audit 2026-03-09T18:48:49.143537+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]': finished
2026-03-09T18:48:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:50 vm08 bash[46122]: audit 2026-03-09T18:48:49.143671+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]': finished
2026-03-09T18:48:50.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:50 vm08 bash[46122]: cluster 2026-03-09T18:48:49.159629+0000 mon.a (mon.0) 588 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:50 vm00 bash[65531]: audit 2026-03-09T18:48:49.143352+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]': finished
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:50 vm00 bash[65531]: audit 2026-03-09T18:48:49.143537+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]': finished
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:50 vm00 bash[65531]: audit 2026-03-09T18:48:49.143671+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]': finished
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:50 vm00 bash[65531]: cluster 2026-03-09T18:48:49.159629+0000 mon.a (mon.0) 588 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:50 vm00 bash[69512]: audit 2026-03-09T18:48:49.143352+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 2]}]': finished
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:50 vm00 bash[69512]: audit 2026-03-09T18:48:49.143537+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "3.2"}]': finished
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:50 vm00 bash[69512]: audit 2026-03-09T18:48:49.143671+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "4.19"}]': finished
2026-03-09T18:48:50.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:50 vm00 bash[69512]: cluster 2026-03-09T18:48:49.159629+0000 mon.a (mon.0) 588 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-09T18:48:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:51 vm08 bash[46122]: cluster 2026-03-09T18:48:50.114854+0000 mgr.y (mgr.44107) 379 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 117 KiB/s rd, 102 B/s wr, 181 op/s
2026-03-09T18:48:51.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:51 vm08 bash[46122]: cluster 2026-03-09T18:48:50.149242+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in
2026-03-09T18:48:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:51 vm00 bash[69512]: cluster 2026-03-09T18:48:50.114854+0000 mgr.y (mgr.44107) 379 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 117 KiB/s rd, 102 B/s wr, 181 op/s
2026-03-09T18:48:51.538 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:51 vm00 bash[69512]: cluster 2026-03-09T18:48:50.149242+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in
2026-03-09T18:48:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:51 vm00 bash[65531]: cluster 2026-03-09T18:48:50.114854+0000 mgr.y (mgr.44107) 379 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 117 KiB/s rd, 102 B/s wr, 181 op/s
2026-03-09T18:48:51.538 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:51 vm00 bash[65531]: cluster 2026-03-09T18:48:50.149242+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in
2026-03-09T18:48:52.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:52 vm08 bash[46122]: cluster 2026-03-09T18:48:52.171867+0000 mon.a (mon.0) 590 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
2026-03-09T18:48:52.535 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:52 vm00 bash[65531]: cluster 2026-03-09T18:48:52.171867+0000 mon.a (mon.0) 590 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
2026-03-09T18:48:52.535 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:52 vm00 bash[69512]: cluster 2026-03-09T18:48:52.171867+0000 mon.a (mon.0) 590 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:51.653564+0000 mgr.y (mgr.44107) 380 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: cluster 2026-03-09T18:48:52.115426+0000 mgr.y (mgr.44107) 381 : cluster [DBG] pgmap v202: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 127 B/s wr, 98 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.335377+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.341421+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.348515+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.353745+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.356330+0000 mon.c (mon.1) 391 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.357331+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.361882+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.403800+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.405365+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.406494+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.407427+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.408711+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.410462+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.411749+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.412497+0000 mgr.y (mgr.44107) 382 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.417131+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.419588+0000 mon.c (mon.1) 400 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.419808+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.423466+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]': finished
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.425995+0000 mon.c (mon.1) 401 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.426205+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.429723+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]': finished
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.432646+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.433474+0000 mgr.y (mgr.44107) 383 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.437480+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.440275+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.441413+0000 mon.c (mon.1) 404 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.442136+0000 mgr.y (mgr.44107) 384 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.446501+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.450125+0000 mon.c (mon.1) 405 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.455235+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.458465+0000 mon.c (mon.1) 406 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.462932+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.466002+0000 mon.c (mon.1) 407 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.467377+0000 mon.c (mon.1) 408 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.468618+0000 mon.c (mon.1) 409 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.469874+0000 mon.c (mon.1) 410 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.469874+0000 mon.c (mon.1) 410 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.471007+0000 mon.c (mon.1) 411 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.471007+0000 mon.c (mon.1) 411 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.472142+0000 mon.c (mon.1) 412 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.472142+0000 mon.c (mon.1) 412 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.473970+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 
bash[69512]: audit 2026-03-09T18:48:51.653564+0000 mgr.y (mgr.44107) 380 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:51.653564+0000 mgr.y (mgr.44107) 380 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cluster 2026-03-09T18:48:52.115426+0000 mgr.y (mgr.44107) 381 : cluster [DBG] pgmap v202: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 127 B/s wr, 98 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cluster 2026-03-09T18:48:52.115426+0000 mgr.y (mgr.44107) 381 : cluster [DBG] pgmap v202: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 127 B/s wr, 98 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.335377+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.335377+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.341421+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.341421+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.348515+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.348515+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.353745+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.353745+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.356330+0000 mon.c (mon.1) 391 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.356330+0000 mon.c (mon.1) 391 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.357331+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.357331+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.631 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.361882+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.361882+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.403800+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.403800+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.405365+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.405365+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.406494+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.406494+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.407427+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.407427+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.408711+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.408711+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.410462+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.410462+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.411749+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 
2026-03-09T18:48:52.411749+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.412497+0000 mgr.y (mgr.44107) 382 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.412497+0000 mgr.y (mgr.44107) 382 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.417131+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.417131+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.419588+0000 mon.c (mon.1) 400 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.419588+0000 mon.c (mon.1) 400 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.419808+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch 2026-03-09T18:48:53.631 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.419808+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.423466+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]': finished 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.423466+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]': finished 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.425995+0000 mon.c (mon.1) 401 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.425995+0000 mon.c (mon.1) 401 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.426205+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.426205+0000 
mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.429723+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]': finished 2026-03-09T18:48:53.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.429723+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.432646+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.432646+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.433474+0000 mgr.y (mgr.44107) 383 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.433474+0000 mgr.y (mgr.44107) 383 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.437480+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.632 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.437480+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.440275+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.440275+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.441413+0000 mon.c (mon.1) 404 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.441413+0000 mon.c (mon.1) 404 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.442136+0000 mgr.y (mgr.44107) 384 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.442136+0000 mgr.y (mgr.44107) 384 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.446501+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 
bash[69512]: audit 2026-03-09T18:48:52.446501+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.473970+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.474151+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.474151+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.477653+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.477653+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.480112+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.480112+0000 
mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.480311+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.480311+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.483043+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.483043+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.485311+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.485311+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.632 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.485681+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.485681+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.488766+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.488766+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.491412+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.491412+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.491617+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.491617+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.494452+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.494452+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.496607+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.496607+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.496797+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.496797+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.499261+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.499261+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.501677+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.501677+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.501872+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.501872+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.504692+0000 mon.a 
(mon.0) 616 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.504692+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.507240+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.507240+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.632 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.507442+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.507442+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.510349+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:48:53.633 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.510349+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.512581+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.512581+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.512773+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.512773+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.513613+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.513613+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.513792+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.513792+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.516357+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.516357+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.518405+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.518405+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.633 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.518585+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.518585+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.519459+0000 mon.c (mon.1) 423 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.519459+0000 mon.c (mon.1) 423 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.519630+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.519630+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.522583+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", 
"name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.522583+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.524599+0000 mon.c (mon.1) 424 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.524599+0000 mon.c (mon.1) 424 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.524804+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.524804+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.527685+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.527685+0000 
mon.a (mon.0) 626 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.530036+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.530036+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.530235+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.530235+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.531095+0000 mon.c (mon.1) 426 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.531095+0000 mon.c (mon.1) 426 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.531310+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.531310+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.532142+0000 mon.c (mon.1) 427 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.532142+0000 mon.c (mon.1) 427 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.532353+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.532353+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.533195+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.533195+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.533411+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.533411+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.534279+0000 mon.c (mon.1) 429 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.534279+0000 mon.c (mon.1) 429 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.534484+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.534484+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.535338+0000 mon.c (mon.1) 430 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.535338+0000 mon.c (mon.1) 430 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.535535+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.535535+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.536649+0000 mon.c (mon.1) 431 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.536649+0000 mon.c (mon.1) 431 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 
2026-03-09T18:48:52.536821+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.536821+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.543505+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.543505+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.544432+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.544432+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.545703+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 
2026-03-09T18:48:52.545703+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.633 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.546462+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.546462+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.706341+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.706341+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.748034+0000 mon.c (mon.1) 435 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.748034+0000 mon.c (mon.1) 435 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.749670+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.749670+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.750772+0000 mon.c (mon.1) 437 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.750772+0000 mon.c (mon.1) 437 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.756223+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:52.756223+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:53.327609+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:53 vm00 bash[65531]: audit 2026-03-09T18:48:53.327609+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.450125+0000 mon.c (mon.1) 405 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.450125+0000 mon.c (mon.1) 405 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.455235+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.455235+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.458465+0000 mon.c (mon.1) 406 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.458465+0000 mon.c (mon.1) 406 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.462932+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.462932+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.466002+0000 mon.c (mon.1) 407 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.466002+0000 mon.c (mon.1) 407 : 
audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.467377+0000 mon.c (mon.1) 408 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.467377+0000 mon.c (mon.1) 408 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.468618+0000 mon.c (mon.1) 409 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.468618+0000 mon.c (mon.1) 409 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.469874+0000 mon.c (mon.1) 410 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.469874+0000 mon.c (mon.1) 410 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.471007+0000 mon.c (mon.1) 411 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.471007+0000 mon.c (mon.1) 411 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.472142+0000 mon.c (mon.1) 412 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.472142+0000 mon.c (mon.1) 412 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.473970+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.473970+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.474151+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.474151+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 
2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.477653+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.477653+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.480112+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.480112+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.480311+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.480311+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.483043+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]': finished 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.483043+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:48:53.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.485311+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.485311+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.485681+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.485681+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.488766+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.488766+0000 mon.a (mon.0) 610 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.491412+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.491412+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.491617+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.491617+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.494452+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.494452+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 
2026-03-09T18:48:52.496607+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.496607+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.496797+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.496797+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.499261+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.499261+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.501677+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.635 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.501677+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.501872+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.501872+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.504692+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.504692+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.507240+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.507240+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.507442+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.507442+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.510349+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.510349+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.512581+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.512581+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 
2026-03-09T18:48:52.512773+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.512773+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.513613+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.513613+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.513792+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.513792+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.516357+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': 
finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.516357+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.518405+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.518405+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.518585+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.518585+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.519459+0000 mon.c (mon.1) 423 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.519459+0000 mon.c (mon.1) 423 : 
audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.519630+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.519630+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.522583+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.522583+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.524599+0000 mon.c (mon.1) 424 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.524599+0000 mon.c (mon.1) 424 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.635 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.524804+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.524804+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.527685+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.527685+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.530036+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.530036+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.530235+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.530235+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.531095+0000 mon.c (mon.1) 426 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.531095+0000 mon.c (mon.1) 426 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.531310+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.531310+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.532142+0000 mon.c (mon.1) 427 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.532142+0000 mon.c (mon.1) 427 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.532353+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.532353+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.533195+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.533195+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.533411+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.533411+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 
2026-03-09T18:48:52.534279+0000 mon.c (mon.1) 429 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.534279+0000 mon.c (mon.1) 429 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.534484+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.534484+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.535338+0000 mon.c (mon.1) 430 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.535338+0000 mon.c (mon.1) 430 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.535535+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.535535+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.536649+0000 mon.c (mon.1) 431 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.536649+0000 mon.c (mon.1) 431 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.536821+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.536821+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.543505+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.543505+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 
2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.544432+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.544432+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.545703+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.545703+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.546462+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.546462+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.706341+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.636 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.706341+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.748034+0000 mon.c (mon.1) 435 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.748034+0000 mon.c (mon.1) 435 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.749670+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.749670+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.750772+0000 mon.c (mon.1) 437 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.750772+0000 mon.c (mon.1) 437 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 
bash[69512]: audit 2026-03-09T18:48:52.756223+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:52.756223+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:53.327609+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.636 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:53 vm00 bash[69512]: audit 2026-03-09T18:48:53.327609+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:51.653564+0000 mgr.y (mgr.44107) 380 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:51.653564+0000 mgr.y (mgr.44107) 380 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: cluster 2026-03-09T18:48:52.115426+0000 mgr.y (mgr.44107) 381 : cluster [DBG] pgmap v202: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 127 B/s wr, 98 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: cluster 2026-03-09T18:48:52.115426+0000 mgr.y (mgr.44107) 381 : cluster [DBG] pgmap v202: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 127 B/s wr, 98 op/s; 0 B/s, 0 objects/s recovering 
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.335377+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.335377+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.341421+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.341421+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.348515+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.348515+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.353745+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.353745+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.356330+0000 mon.c (mon.1) 391 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 
2026-03-09T18:48:52.356330+0000 mon.c (mon.1) 391 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.357331+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.361882+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.403800+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.405365+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.406494+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.407427+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.408711+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.410462+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.411749+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.412497+0000 mgr.y (mgr.44107) 382 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.417131+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.419588+0000 mon.c (mon.1) 400 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.419808+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.423466+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm00.ygjynr"}]': finished
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.425995+0000 mon.c (mon.1) 401 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.426205+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.429723+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm08.rcuedn"}]': finished
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.432646+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.433474+0000 mgr.y (mgr.44107) 383 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.437480+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.726 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.440275+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.441413+0000 mon.c (mon.1) 404 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.442136+0000 mgr.y (mgr.44107) 384 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.446501+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.450125+0000 mon.c (mon.1) 405 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.455235+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.458465+0000 mon.c (mon.1) 406 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.462932+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.466002+0000 mon.c (mon.1) 407 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.467377+0000 mon.c (mon.1) 408 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.468618+0000 mon.c (mon.1) 409 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.469874+0000 mon.c (mon.1) 410 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.471007+0000 mon.c (mon.1) 411 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.472142+0000 mon.c (mon.1) 412 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.473970+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.474151+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.477653+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.480112+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.480311+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.483043+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.485311+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.485681+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.488766+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.491412+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.491617+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.494452+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.496607+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.496797+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.499261+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.501677+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.501872+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.504692+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.507240+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:48:53.727 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.507442+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.510349+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.512581+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.512773+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.513613+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.513792+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.516357+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.518405+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.518585+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.519459+0000 mon.c (mon.1) 423 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.519630+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.522583+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.524599+0000 mon.c (mon.1) 424 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.524804+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.527685+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.530036+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.530235+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.531095+0000 mon.c (mon.1) 426 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.531310+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.532142+0000 mon.c (mon.1) 427 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.532353+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit
2026-03-09T18:48:52.533195+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.533195+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.533411+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.533411+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.534279+0000 mon.c (mon.1) 429 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.534279+0000 mon.c (mon.1) 429 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.534484+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.534484+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.535338+0000 mon.c (mon.1) 430 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.535338+0000 mon.c (mon.1) 430 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.535535+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.535535+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.536649+0000 mon.c (mon.1) 431 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.536649+0000 mon.c (mon.1) 431 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key 
del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.536821+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.536821+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.543505+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.543505+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.544432+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.544432+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.545703+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.545703+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.546462+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.546462+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.706341+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.706341+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.748034+0000 mon.c (mon.1) 435 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.748034+0000 mon.c (mon.1) 435 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 
2026-03-09T18:48:52.749670+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.749670+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.750772+0000 mon.c (mon.1) 437 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.750772+0000 mon.c (mon.1) 437 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.756223+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:52.756223+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:53.327609+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:53.729 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:53 vm08 bash[46122]: audit 2026-03-09T18:48:53.327609+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.450946+0000 mgr.y 
(mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.450946+0000 mgr.y (mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.459272+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.459272+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.472889+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.472889+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.536234+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.536234+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.549540+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:54 vm00 bash[69512]: cephadm 2026-03-09T18:48:52.549540+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.450946+0000 mgr.y (mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.450946+0000 mgr.y (mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.459272+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.459272+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.472889+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.472889+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.536234+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Complete! 
2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.536234+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.549540+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:48:54.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:54 vm00 bash[65531]: cephadm 2026-03-09T18:48:52.549540+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.450946+0000 mgr.y (mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.450946+0000 mgr.y (mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.459272+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.459272+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.472889+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.472889+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:48:54.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.536234+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.536234+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Complete! 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.549540+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:48:54.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:54 vm08 bash[46122]: cephadm 2026-03-09T18:48:52.549540+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T18:48:55.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:55 vm00 bash[69512]: cluster 2026-03-09T18:48:54.115895+0000 mgr.y (mgr.44107) 390 : cluster [DBG] pgmap v203: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 12 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:55.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:55 vm00 bash[69512]: cluster 2026-03-09T18:48:54.115895+0000 mgr.y (mgr.44107) 390 : cluster [DBG] pgmap v203: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 12 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:55.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:55 vm00 bash[65531]: cluster 2026-03-09T18:48:54.115895+0000 mgr.y (mgr.44107) 390 : cluster [DBG] pgmap v203: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 12 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:55.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:55 vm00 bash[65531]: cluster 2026-03-09T18:48:54.115895+0000 mgr.y (mgr.44107) 390 : cluster [DBG] 
pgmap v203: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 12 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:55 vm08 bash[46122]: cluster 2026-03-09T18:48:54.115895+0000 mgr.y (mgr.44107) 390 : cluster [DBG] pgmap v203: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 12 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:55 vm08 bash[46122]: cluster 2026-03-09T18:48:54.115895+0000 mgr.y (mgr.44107) 390 : cluster [DBG] pgmap v203: 161 pgs: 1 activating, 160 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 12 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:56 vm08 bash[46122]: cluster 2026-03-09T18:48:56.347105+0000 mon.a (mon.0) 638 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T18:48:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:56 vm08 bash[46122]: cluster 2026-03-09T18:48:56.347105+0000 mon.a (mon.0) 638 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T18:48:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:56 vm08 bash[46122]: cluster 2026-03-09T18:48:56.347122+0000 mon.a (mon.0) 639 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:56.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:56 vm08 bash[46122]: cluster 2026-03-09T18:48:56.347122+0000 mon.a (mon.0) 639 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:56 vm00 bash[69512]: cluster 2026-03-09T18:48:56.347105+0000 mon.a (mon.0) 638 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data 
availability: 1 pg inactive) 2026-03-09T18:48:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:56 vm00 bash[69512]: cluster 2026-03-09T18:48:56.347105+0000 mon.a (mon.0) 638 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T18:48:56.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:56 vm00 bash[69512]: cluster 2026-03-09T18:48:56.347122+0000 mon.a (mon.0) 639 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:56.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:56 vm00 bash[69512]: cluster 2026-03-09T18:48:56.347122+0000 mon.a (mon.0) 639 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:56 vm00 bash[65531]: cluster 2026-03-09T18:48:56.347105+0000 mon.a (mon.0) 638 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T18:48:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:56 vm00 bash[65531]: cluster 2026-03-09T18:48:56.347105+0000 mon.a (mon.0) 638 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T18:48:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:56 vm00 bash[65531]: cluster 2026-03-09T18:48:56.347122+0000 mon.a (mon.0) 639 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:56.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:56 vm00 bash[65531]: cluster 2026-03-09T18:48:56.347122+0000 mon.a (mon.0) 639 : cluster [INF] Cluster is now healthy 2026-03-09T18:48:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:57 vm08 bash[46122]: cluster 2026-03-09T18:48:56.116353+0000 mgr.y (mgr.44107) 391 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:57 vm08 bash[46122]: cluster 
2026-03-09T18:48:56.116353+0000 mgr.y (mgr.44107) 391 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:57.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:57 vm00 bash[69512]: cluster 2026-03-09T18:48:56.116353+0000 mgr.y (mgr.44107) 391 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:57.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:57 vm00 bash[69512]: cluster 2026-03-09T18:48:56.116353+0000 mgr.y (mgr.44107) 391 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:57 vm00 bash[65531]: cluster 2026-03-09T18:48:56.116353+0000 mgr.y (mgr.44107) 391 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:57 vm00 bash[65531]: cluster 2026-03-09T18:48:56.116353+0000 mgr.y (mgr.44107) 391 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:59.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:59 vm08 bash[46122]: cluster 2026-03-09T18:48:58.116731+0000 mgr.y (mgr.44107) 392 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.6 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:59.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:48:59 vm08 bash[46122]: cluster 2026-03-09T18:48:58.116731+0000 mgr.y (mgr.44107) 392 : cluster [DBG] pgmap v205: 161 
pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.6 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:59.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:59 vm00 bash[69512]: cluster 2026-03-09T18:48:58.116731+0000 mgr.y (mgr.44107) 392 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.6 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:59.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:48:59 vm00 bash[69512]: cluster 2026-03-09T18:48:58.116731+0000 mgr.y (mgr.44107) 392 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.6 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:59.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:48:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:48:59] "GET /metrics HTTP/1.1" 200 37991 "" "Prometheus/2.51.0" 2026-03-09T18:48:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:59 vm00 bash[65531]: cluster 2026-03-09T18:48:58.116731+0000 mgr.y (mgr.44107) 392 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.6 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:48:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:48:59 vm00 bash[65531]: cluster 2026-03-09T18:48:58.116731+0000 mgr.y (mgr.44107) 392 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.6 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:49:01.659 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:01 vm00 bash[69512]: cluster 2026-03-09T18:49:00.117042+0000 mgr.y (mgr.44107) 393 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T18:49:01.659 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:49:01 vm00 bash[69512]: cluster 2026-03-09T18:49:00.117042+0000 mgr.y (mgr.44107) 393 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:49:01.659 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:01 vm00 bash[65531]: cluster 2026-03-09T18:49:00.117042+0000 mgr.y (mgr.44107) 393 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:49:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:01 vm08 bash[46122]: cluster 2026-03-09T18:49:00.117042+0000 mgr.y (mgr.44107) 393 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:49:03.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:03 vm08 bash[46122]: audit 2026-03-09T18:49:01.663233+0000 mgr.y (mgr.44107) 394 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:03 vm08 bash[46122]: cluster 2026-03-09T18:49:02.117459+0000 mgr.y (mgr.44107) 395 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:49:03.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:03 vm08 bash[46122]: audit 2026-03-09T18:49:03.102960+0000 mon.c (mon.1) 438 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:03.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:03 vm00 bash[69512]: audit 2026-03-09T18:49:01.663233+0000 mgr.y (mgr.44107) 394 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:03.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:03 vm00 bash[69512]: cluster 2026-03-09T18:49:02.117459+0000 mgr.y (mgr.44107) 395 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:49:03.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:03 vm00 bash[69512]: audit 2026-03-09T18:49:03.102960+0000 mon.c (mon.1) 438 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:03 vm00 bash[65531]: audit 2026-03-09T18:49:01.663233+0000 mgr.y (mgr.44107) 394 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:03 vm00 bash[65531]: cluster 2026-03-09T18:49:02.117459+0000 mgr.y (mgr.44107) 395 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T18:49:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:03 vm00 bash[65531]: audit 2026-03-09T18:49:03.102960+0000 mon.c (mon.1) 438 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:05 vm08 bash[46122]: cluster 2026-03-09T18:49:04.117781+0000 mgr.y (mgr.44107) 396 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s
2026-03-09T18:49:05.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:05 vm00 bash[69512]: cluster 2026-03-09T18:49:04.117781+0000 mgr.y (mgr.44107) 396 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s
2026-03-09T18:49:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:05 vm00 bash[65531]: cluster 2026-03-09T18:49:04.117781+0000 mgr.y (mgr.44107) 396 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s
2026-03-09T18:49:07.400 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:49:07.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:07 vm00 bash[69512]: cluster 2026-03-09T18:49:06.118229+0000 mgr.y (mgr.44107) 397 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s
2026-03-09T18:49:07.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:07 vm00 bash[65531]: cluster 2026-03-09T18:49:06.118229+0000 mgr.y (mgr.44107) 397 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s
2026-03-09T18:49:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:07 vm08 bash[46122]: cluster 2026-03-09T18:49:06.118229+0000 mgr.y (mgr.44107) 397 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (19m) 21s ago 26m 14.6M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (6m) 21s ago 26m 66.7M - 10.4.0 c8b91775d855 5e0e30d27ab2
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (7m) 21s ago 25m 44.6M - 3.5 e1d6a67b021e ff3da66cebe9
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (7m) 21s ago 29m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (16m) 21s ago 29m 537M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (5m) 21s ago 29m 56.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (6m) 21s ago 29m 51.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (5m) 21s ago 29m 52.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (19m) 21s ago 26m 7808k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (19m) 21s ago 26m 8267k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (3m) 21s ago 28m 53.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (2m) 21s ago 28m 52.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (4m) 21s ago 28m 51.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (4m) 21s ago 28m 76.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (2m) 21s ago 27m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (109s) 21s ago 27m 71.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (94s) 21s ago 27m 48.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (79s) 21s ago 26m 69.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9904fad47d23
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (7m) 21s ago 26m 47.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (28s) 21s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e c812b26432aa
2026-03-09T18:49:07.821 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (27s) 21s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e a1f2a8ce96e5
2026-03-09T18:49:07.865 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.rgw | length == 1'"'"''
2026-03-09T18:49:08.312 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:49:08.348 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.rgw | keys'"'"' | grep $sha1'
2026-03-09T18:49:08.566 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:08 vm00 bash[65531]: audit 2026-03-09T18:49:07.340932+0000 mgr.y (mgr.44107) 398 : audit [DBG] from='client.54606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:08.566 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:08 vm00 bash[65531]: audit 2026-03-09T18:49:08.303474+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1151681505' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:08.566 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:08 vm00 bash[69512]: audit 2026-03-09T18:49:07.340932+0000 mgr.y (mgr.44107) 398 : audit [DBG] from='client.54606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:08.566 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:08 vm00 bash[69512]: audit 2026-03-09T18:49:08.303474+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1151681505' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:08.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:08 vm08 bash[46122]: audit 2026-03-09T18:49:07.340932+0000 mgr.y (mgr.44107) 398 : audit [DBG] from='client.54606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:08.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:08 vm08 bash[46122]: audit 2026-03-09T18:49:08.303474+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1151681505' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:08.793 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-09T18:49:08.827 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null,
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false,
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "which": "",
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [],
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null,
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "message": "",
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:49:09.234 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:49:09.278 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-09T18:49:09.501 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:09 vm00 bash[65531]: audit 2026-03-09T18:49:07.820946+0000 mgr.y (mgr.44107) 399 : audit [DBG] from='client.34517 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:09.501 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:09 vm00 bash[65531]: cluster 2026-03-09T18:49:08.118677+0000 mgr.y (mgr.44107) 400 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s
2026-03-09T18:49:09.501 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:09 vm00 bash[65531]: audit 2026-03-09T18:49:08.787096+0000 mon.c (mon.1) 439 : audit [DBG] from='client.? 192.168.123.100:0/3180915438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:09.501 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:09 vm00 bash[69512]: audit 2026-03-09T18:49:07.820946+0000 mgr.y (mgr.44107) 399 : audit [DBG] from='client.34517 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:09.501 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:09 vm00 bash[69512]: cluster 2026-03-09T18:49:08.118677+0000 mgr.y (mgr.44107) 400 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s
2026-03-09T18:49:09.501 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:09 vm00 bash[69512]: audit 2026-03-09T18:49:08.787096+0000 mon.c (mon.1) 439 : audit [DBG] from='client.? 192.168.123.100:0/3180915438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:09.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:49:09] "GET /metrics HTTP/1.1" 200 37993 "" "Prometheus/2.51.0"
2026-03-09T18:49:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:09 vm08 bash[46122]: audit 2026-03-09T18:49:07.820946+0000 mgr.y (mgr.44107) 399 : audit [DBG] from='client.34517 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:09 vm08 bash[46122]: cluster 2026-03-09T18:49:08.118677+0000 mgr.y (mgr.44107) 400 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s
2026-03-09T18:49:09.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:09 vm08 bash[46122]: audit 2026-03-09T18:49:08.787096+0000 mon.c (mon.1) 439 : audit [DBG] from='client.? 192.168.123.100:0/3180915438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:09.758 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK
2026-03-09T18:49:09.815 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1'
2026-03-09T18:49:10.253 INFO:teuthology.orchestra.run.vm00.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:49:10.312 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-09T18:49:10.314 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm00.local
2026-03-09T18:49:10.314 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done'
2026-03-09T18:49:10.594 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:09.237614+0000 mgr.y (mgr.44107) 401 : audit [DBG] from='client.44647 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:10.594 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:09.760935+0000 mon.c (mon.1) 440 : audit [DBG] from='client.? 192.168.123.100:0/1561760169' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:49:10.594 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:10.253941+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:10.594 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:10.257770+0000 mon.c (mon.1) 441 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:10.259234+0000 mon.c (mon.1) 442 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:10.260151+0000 mon.c (mon.1) 443 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:10 vm00 bash[65531]: audit 2026-03-09T18:49:10.265199+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:09.237614+0000 mgr.y (mgr.44107) 401 : audit [DBG] from='client.44647 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:09.760935+0000 mon.c (mon.1) 440 : audit [DBG] from='client.? 192.168.123.100:0/1561760169' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:10.253941+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:10.257770+0000 mon.c (mon.1) 441 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:10.259234+0000 mon.c (mon.1) 442 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:10.260151+0000 mon.c (mon.1) 443 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:49:10.595 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:10 vm00 bash[69512]: audit 2026-03-09T18:49:10.265199+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:09.237614+0000 mgr.y (mgr.44107) 401 : audit [DBG] from='client.44647 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:09.760935+0000 mon.c (mon.1) 440 : audit [DBG] from='client.? 192.168.123.100:0/1561760169' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:10.253941+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:10.257770+0000 mon.c (mon.1) 441 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:10.259234+0000 mon.c (mon.1) 442 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:10.260151+0000 mon.c (mon.1) 443 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:49:10.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:10 vm08 bash[46122]: audit 2026-03-09T18:49:10.265199+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:10.805 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:49:11.175 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:49:11.175 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (19m) 25s ago 26m 14.6M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:49:11.175 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (6m) 25s ago 26m 66.7M - 10.4.0 c8b91775d855 5e0e30d27ab2
2026-03-09T18:49:11.175 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (7m) 25s ago 25m 44.6M - 3.5 e1d6a67b021e ff3da66cebe9
2026-03-09T18:49:11.175 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (7m) 25s ago 29m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (16m) 25s ago 29m 537M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (5m) 25s ago 29m 56.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (6m) 25s ago 29m 51.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (5m) 25s ago 29m 52.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (19m) 25s ago 26m 7808k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (19m) 25s ago 26m 8267k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (3m) 25s ago 28m 53.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (2m) 25s ago 28m 52.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (4m) 25s ago 28m 51.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (4m) 25s ago 28m 76.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (2m) 25s ago 27m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (112s) 25s ago 27m 71.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (97s) 25s ago 27m 48.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (82s) 25s ago 27m 69.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9904fad47d23
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (7m) 25s ago 26m 47.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0
2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (32s) 25s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e c812b26432aa 2026-03-09T18:49:11.176 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (30s) 25s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e a1f2a8ce96e5 2026-03-09T18:49:11.403 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:49:11.403 INFO:teuthology.orchestra.run.vm00.stdout: "mon": { 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": { 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "osd": { 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": { 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "overall": { 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 15 2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout: } 
2026-03-09T18:49:11.404 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:49:11.597 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-09T18:49:11.597 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T18:49:11.597 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": true, 2026-03-09T18:49:11.597 INFO:teuthology.orchestra.run.vm00.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T18:49:11.597 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [], 2026-03-09T18:49:11.598 INFO:teuthology.orchestra.run.vm00.stdout: "progress": "", 2026-03-09T18:49:11.598 INFO:teuthology.orchestra.run.vm00.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T18:49:11.598 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false 2026-03-09T18:49:11.598 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: cluster 2026-03-09T18:49:10.119071+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: cluster 2026-03-09T18:49:10.119071+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: audit 2026-03-09T18:49:10.249010+0000 mgr.y (mgr.44107) 403 : audit [DBG] from='client.44656 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:11.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: audit 2026-03-09T18:49:10.249010+0000 mgr.y (mgr.44107) 403 : audit [DBG] from='client.44656 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: cephadm 2026-03-09T18:49:10.249465+0000 mgr.y (mgr.44107) 404 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: cephadm 2026-03-09T18:49:10.249465+0000 mgr.y (mgr.44107) 404 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: cephadm 2026-03-09T18:49:10.314342+0000 mgr.y (mgr.44107) 405 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: cephadm 2026-03-09T18:49:10.314342+0000 mgr.y (mgr.44107) 405 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: audit 2026-03-09T18:49:11.407182+0000 mon.c (mon.1) 444 : audit [DBG] from='client.? 192.168.123.100:0/3985829718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:11 vm08 bash[46122]: audit 2026-03-09T18:49:11.407182+0000 mon.c (mon.1) 444 : audit [DBG] from='client.? 
192.168.123.100:0/3985829718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: cluster 2026-03-09T18:49:10.119071+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: cluster 2026-03-09T18:49:10.119071+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: audit 2026-03-09T18:49:10.249010+0000 mgr.y (mgr.44107) 403 : audit [DBG] from='client.44656 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: audit 2026-03-09T18:49:10.249010+0000 mgr.y (mgr.44107) 403 : audit [DBG] from='client.44656 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: cephadm 2026-03-09T18:49:10.249465+0000 mgr.y (mgr.44107) 404 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: cephadm 2026-03-09T18:49:10.249465+0000 mgr.y (mgr.44107) 404 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: cephadm 2026-03-09T18:49:10.314342+0000 mgr.y (mgr.44107) 405 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: cephadm 2026-03-09T18:49:10.314342+0000 mgr.y (mgr.44107) 405 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: audit 2026-03-09T18:49:11.407182+0000 mon.c (mon.1) 444 : audit [DBG] from='client.? 192.168.123.100:0/3985829718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:11 vm00 bash[69512]: audit 2026-03-09T18:49:11.407182+0000 mon.c (mon.1) 444 : audit [DBG] from='client.? 192.168.123.100:0/3985829718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: cluster 2026-03-09T18:49:10.119071+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: cluster 2026-03-09T18:49:10.119071+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: audit 2026-03-09T18:49:10.249010+0000 mgr.y (mgr.44107) 403 : audit [DBG] from='client.44656 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: audit 2026-03-09T18:49:10.249010+0000 mgr.y (mgr.44107) 403 : audit [DBG] from='client.44656 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: cephadm 2026-03-09T18:49:10.249465+0000 mgr.y (mgr.44107) 404 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: cephadm 2026-03-09T18:49:10.249465+0000 mgr.y (mgr.44107) 404 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: cephadm 2026-03-09T18:49:10.314342+0000 mgr.y (mgr.44107) 405 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: cephadm 2026-03-09T18:49:10.314342+0000 mgr.y (mgr.44107) 405 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: audit 2026-03-09T18:49:11.407182+0000 mon.c (mon.1) 444 : audit [DBG] from='client.? 192.168.123.100:0/3985829718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:11.747 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:11 vm00 bash[65531]: audit 2026-03-09T18:49:11.407182+0000 mon.c (mon.1) 444 : audit [DBG] from='client.? 
192.168.123.100:0/3985829718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:11.927 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:49:12.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:10.800127+0000 mgr.y (mgr.44107) 406 : audit [DBG] from='client.34538 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:10.800127+0000 mgr.y (mgr.44107) 406 : audit [DBG] from='client.34538 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.454 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:10.989316+0000 mgr.y (mgr.44107) 407 : audit [DBG] from='client.54642 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:10.989316+0000 mgr.y (mgr.44107) 407 : audit [DBG] from='client.54642 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.175349+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54645 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.175349+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54645 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 
2026-03-09T18:49:11.808022+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.808022+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.811943+0000 mon.c (mon.1) 445 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.811943+0000 mon.c (mon.1) 445 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.813203+0000 mon.c (mon.1) 446 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.813203+0000 mon.c (mon.1) 446 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.816980+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.816980+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.819796+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.819796+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.823633+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.823633+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.826468+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.826468+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.829549+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.829549+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.832581+0000 mon.c (mon.1) 449 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.832581+0000 mon.c (mon.1) 449 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.836246+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:10.800127+0000 mgr.y (mgr.44107) 406 : audit [DBG] from='client.34538 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:10.800127+0000 mgr.y (mgr.44107) 406 : audit [DBG] from='client.34538 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:10.989316+0000 mgr.y (mgr.44107) 407 : audit [DBG] from='client.54642 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:10.989316+0000 mgr.y (mgr.44107) 407 : audit [DBG] from='client.54642 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.175349+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54645 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 
2026-03-09T18:49:11.175349+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54645 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.808022+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.808022+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.811943+0000 mon.c (mon.1) 445 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.811943+0000 mon.c (mon.1) 445 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.813203+0000 mon.c (mon.1) 446 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.813203+0000 mon.c (mon.1) 446 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.816980+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 
2026-03-09T18:49:11.816980+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.819796+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.819796+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.823633+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.823633+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.826468+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.826468+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.829549+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.829549+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.832581+0000 mon.c (mon.1) 449 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.832581+0000 mon.c (mon.1) 449 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.836246+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.836246+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.840332+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.840332+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.843803+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.843803+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.846854+0000 mon.c (mon.1) 451 : 
audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.850218+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.852961+0000 mon.c (mon.1) 452 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.856415+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.859231+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.861042+0000 mon.c (mon.1) 454 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.864345+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:11.930057+0000 mon.a (mon.0) 651 : audit [DBG] from='client.? 192.168.123.100:0/1250439335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:12.254736+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:12.258298+0000 mon.c (mon.1) 455 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:12.258532+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:49:12.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 bash[65531]: audit 2026-03-09T18:49:12.262169+0000 mon.c (mon.1) 456 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.836246+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.840332+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.843803+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.846854+0000 mon.c (mon.1) 451 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.850218+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.852961+0000 mon.c (mon.1) 452 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.856415+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.859231+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.861042+0000 mon.c (mon.1) 454 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.864345+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:11.930057+0000 mon.a (mon.0) 651 : audit [DBG] from='client.? 192.168.123.100:0/1250439335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:12.254736+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:12.258298+0000 mon.c (mon.1) 455 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:12.258532+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:49:12.709 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 bash[69512]: audit 2026-03-09T18:49:12.262169+0000 mon.c (mon.1) 456 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:10.800127+0000 mgr.y (mgr.44107) 406 : audit [DBG] from='client.34538 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:10.989316+0000 mgr.y (mgr.44107) 407 : audit [DBG] from='client.54642 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.175349+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54645 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.808022+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.811943+0000 mon.c (mon.1) 445 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.813203+0000 mon.c (mon.1) 446 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.816980+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.819796+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.823633+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.826468+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.829549+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.832581+0000 mon.c (mon.1) 449 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.836246+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.840332+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.843803+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.846854+0000 mon.c (mon.1) 451 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.850218+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.852961+0000 mon.c (mon.1) 452 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.856415+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.859231+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.861042+0000 mon.c (mon.1) 454 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.864345+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:11.930057+0000 mon.a (mon.0) 651 : audit [DBG] from='client.? 192.168.123.100:0/1250439335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:12.254736+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:12.258298+0000 mon.c (mon.1) 455 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:12.258532+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm00.ywhulq", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T18:49:12.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:12 vm08 bash[46122]: audit 2026-03-09T18:49:12.262169+0000 mon.c (mon.1) 456 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:13.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.129 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:12 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: audit 2026-03-09T18:49:11.601304+0000 mgr.y (mgr.44107) 409 : audit [DBG] from='client.44686 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: audit 2026-03-09T18:49:11.673582+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.810896+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.810918+0000 mgr.y (mgr.44107) 412 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.813827+0000 mgr.y (mgr.44107) 413 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.820429+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.827083+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.833174+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-09T18:49:13.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.840957+0000 mgr.y (mgr.44107) 417 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.847466+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.853605+0000 mgr.y (mgr.44107) 419 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.859849+0000 mgr.y (mgr.44107) 420 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:11.861667+0000 mgr.y (mgr.44107) 421 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cluster 2026-03-09T18:49:12.119509+0000 mgr.y (mgr.44107) 422 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:12.249881+0000 mgr.y (mgr.44107) 423 : cephadm [INF] Upgrade: Updating iscsi.foo.vm00.ywhulq
2026-03-09T18:49:13.725 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:13 vm08 bash[46122]: cephadm 2026-03-09T18:49:12.262941+0000 mgr.y (mgr.44107) 424 : cephadm [INF] Deploying daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: audit 2026-03-09T18:49:11.601304+0000 mgr.y (mgr.44107) 409 : audit [DBG] from='client.44686 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: audit 2026-03-09T18:49:11.673582+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.810896+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.810918+0000 mgr.y (mgr.44107) 412 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.813827+0000 mgr.y (mgr.44107) 413 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.820429+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.827083+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.833174+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.840957+0000 mgr.y (mgr.44107) 417 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.847466+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.853605+0000 mgr.y (mgr.44107) 419 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.859849+0000 mgr.y (mgr.44107) 420 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:11.861667+0000 mgr.y (mgr.44107) 421 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cluster 2026-03-09T18:49:12.119509+0000 mgr.y (mgr.44107) 422 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:12.249881+0000 mgr.y (mgr.44107) 423 : cephadm [INF] Upgrade: Updating iscsi.foo.vm00.ywhulq
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:13 vm00 bash[69512]: cephadm 2026-03-09T18:49:12.262941+0000 mgr.y (mgr.44107) 424 : cephadm [INF] Deploying daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: audit 2026-03-09T18:49:11.601304+0000 mgr.y (mgr.44107) 409 : audit [DBG] from='client.44686 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: audit 2026-03-09T18:49:11.673582+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.810896+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-09T18:49:13.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.810918+0000 mgr.y (mgr.44107) 412 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.813827+0000 mgr.y (mgr.44107) 413 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.820429+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.827083+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.833174+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.840957+0000 mgr.y (mgr.44107) 417 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.847466+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.853605+0000 mgr.y (mgr.44107) 419 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.859849+0000 mgr.y (mgr.44107) 420 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:11.861667+0000 mgr.y (mgr.44107) 421 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cluster 2026-03-09T18:49:12.119509+0000 mgr.y (mgr.44107) 422 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:12.249881+0000 mgr.y (mgr.44107) 423 : cephadm [INF] Upgrade: Updating iscsi.foo.vm00.ywhulq
2026-03-09T18:49:13.880 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:13 vm00 bash[65531]: cephadm 2026-03-09T18:49:12.262941+0000 mgr.y (mgr.44107) 424 : cephadm [INF] Deploying daemon iscsi.foo.vm00.ywhulq on vm00
2026-03-09T18:49:14.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:14 vm08 bash[46122]: cluster 2026-03-09T18:49:14.119813+0000 mgr.y (mgr.44107) 425 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:14.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:14 vm00 bash[65531]: cluster 2026-03-09T18:49:14.119813+0000 mgr.y (mgr.44107) 425 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:14.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:14 vm00 bash[69512]: cluster 2026-03-09T18:49:14.119813+0000 mgr.y (mgr.44107) 425 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:17 vm08 bash[46122]: cluster 2026-03-09T18:49:16.120215+0000 mgr.y (mgr.44107) 426 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:17 vm00 bash[65531]: cluster 2026-03-09T18:49:16.120215+0000 mgr.y (mgr.44107) 426 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:17 vm00 bash[69512]: cluster 2026-03-09T18:49:16.120215+0000 mgr.y (mgr.44107) 426 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:18 vm08 bash[46122]: audit 2026-03-09T18:49:18.102932+0000 mon.c (mon.1) 457 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:18.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:18 vm00 bash[65531]: audit 2026-03-09T18:49:18.102932+0000 mon.c (mon.1) 457 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:18 vm00 bash[69512]: audit 2026-03-09T18:49:18.102932+0000 mon.c (mon.1) 457 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:19 vm08 bash[46122]: cluster 2026-03-09T18:49:18.120610+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:19.524 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:19 vm00 bash[65531]: cluster 2026-03-09T18:49:18.120610+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:19.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:19 vm00 bash[69512]: cluster 2026-03-09T18:49:18.120610+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:19.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:49:19] "GET /metrics HTTP/1.1" 200 37990 "" "Prometheus/2.51.0"
2026-03-09T18:49:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:21 vm08 bash[46122]: cluster 2026-03-09T18:49:20.121030+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:21 vm00 bash[65531]: cluster 2026-03-09T18:49:20.121030+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:21 vm00 bash[69512]: cluster 2026-03-09T18:49:20.121030+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:23.129 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:23.130 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T18:49:23.130 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none.
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:49:23.130 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:49:23.130 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:49:23.130 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:49:23.130 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:49:23.130 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:49:23.130 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:49:23 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:49:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:23 vm08 bash[46122]: audit 2026-03-09T18:49:21.681618+0000 mgr.y (mgr.44107) 429 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:23 vm08 bash[46122]: cluster 2026-03-09T18:49:22.121553+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:23 vm08 bash[46122]: audit 2026-03-09T18:49:23.183356+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:23.481 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:23 vm00 bash[69512]: audit 2026-03-09T18:49:21.681618+0000 mgr.y (mgr.44107) 429 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:23.481 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:23 vm00 bash[69512]: cluster 2026-03-09T18:49:22.121553+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:23.481 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:23 vm00 bash[69512]: audit 2026-03-09T18:49:23.183356+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:23.482 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:23 vm00 bash[65531]: audit 2026-03-09T18:49:21.681618+0000 mgr.y (mgr.44107) 429 : audit [DBG] from='client.25132 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:23.482 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:23 vm00 bash[65531]: cluster 2026-03-09T18:49:22.121553+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:23.482 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:23 vm00 bash[65531]: audit 2026-03-09T18:49:23.183356+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:24.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:24 vm08 bash[46122]: audit 2026-03-09T18:49:23.202725+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:24.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:24 vm08 bash[46122]: audit 2026-03-09T18:49:23.204494+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:24.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:24 vm08 bash[46122]: audit 2026-03-09T18:49:23.650843+0000 mon.c (mon.1) 459 : audit [DBG] from='client.? 192.168.123.100:0/3727826256' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:49:24.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:24 vm08 bash[46122]: audit 2026-03-09T18:49:23.810996+0000 mon.b (mon.2) 26 : audit [INF] from='client.? 192.168.123.100:0/2498099177' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]: dispatch
2026-03-09T18:49:24.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:24 vm08 bash[46122]: audit 2026-03-09T18:49:23.814471+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]: dispatch
2026-03-09T18:49:24.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:24 vm00 bash[65531]: audit 2026-03-09T18:49:23.202725+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:24.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:24 vm00 bash[65531]: audit 2026-03-09T18:49:23.204494+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:24.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:24 vm00 bash[65531]: audit 2026-03-09T18:49:23.650843+0000 mon.c (mon.1) 459 : audit [DBG] from='client.? 192.168.123.100:0/3727826256' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:49:24.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:24 vm00 bash[65531]: audit 2026-03-09T18:49:23.810996+0000 mon.b (mon.2) 26 : audit [INF] from='client.? 192.168.123.100:0/2498099177' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]: dispatch
2026-03-09T18:49:24.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:24 vm00 bash[65531]: audit 2026-03-09T18:49:23.814471+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]: dispatch
2026-03-09T18:49:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:24 vm00 bash[69512]: audit 2026-03-09T18:49:23.202725+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:24 vm00 bash[69512]: audit 2026-03-09T18:49:23.204494+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:24 vm00 bash[69512]: audit 2026-03-09T18:49:23.650843+0000 mon.c (mon.1) 459 : audit [DBG] from='client.? 192.168.123.100:0/3727826256' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T18:49:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:24 vm00 bash[69512]: audit 2026-03-09T18:49:23.810996+0000 mon.b (mon.2) 26 : audit [INF] from='client.? 192.168.123.100:0/2498099177' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]: dispatch
2026-03-09T18:49:24.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:24 vm00 bash[69512]: audit 2026-03-09T18:49:23.814471+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]: dispatch
2026-03-09T18:49:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:25 vm08 bash[46122]: cluster 2026-03-09T18:49:24.121875+0000 mgr.y (mgr.44107) 431 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:25 vm08 bash[46122]: audit 2026-03-09T18:49:24.223099+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]': finished
2026-03-09T18:49:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:25 vm08 bash[46122]: cluster 2026-03-09T18:49:24.242118+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T18:49:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:25 vm08 bash[46122]: audit 2026-03-09T18:49:24.404362+0000 mon.c (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/1672570995' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]: dispatch
2026-03-09T18:49:25.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:25 vm08 bash[46122]: audit 2026-03-09T18:49:24.404791+0000 mon.a (mon.0) 659 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]: dispatch
2026-03-09T18:49:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:25 vm00 bash[65531]: cluster 2026-03-09T18:49:24.121875+0000 mgr.y (mgr.44107) 431 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:25 vm00 bash[65531]: audit 2026-03-09T18:49:24.223099+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]': finished
2026-03-09T18:49:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:25 vm00 bash[65531]: cluster 2026-03-09T18:49:24.242118+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T18:49:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:25 vm00 bash[65531]: audit 2026-03-09T18:49:24.404362+0000 mon.c (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/1672570995' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]: dispatch
2026-03-09T18:49:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:25 vm00 bash[65531]: audit 2026-03-09T18:49:24.404791+0000 mon.a (mon.0) 659 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]: dispatch
2026-03-09T18:49:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:25 vm00 bash[69512]: cluster 2026-03-09T18:49:24.121875+0000 mgr.y (mgr.44107) 431 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:25 vm00 bash[69512]: audit 2026-03-09T18:49:24.223099+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1752662253"}]': finished
2026-03-09T18:49:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:25 vm00 bash[69512]: cluster 2026-03-09T18:49:24.242118+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T18:49:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:25 vm00 bash[69512]: audit 2026-03-09T18:49:24.404362+0000 mon.c (mon.1) 460 : audit [INF] from='client.? 192.168.123.100:0/1672570995' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]: dispatch
2026-03-09T18:49:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:25 vm00 bash[69512]: audit 2026-03-09T18:49:24.404791+0000 mon.a (mon.0) 659 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]: dispatch
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:26 vm00 bash[65531]: audit 2026-03-09T18:49:25.231909+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]': finished
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:26 vm00 bash[65531]: cluster 2026-03-09T18:49:25.238740+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:26 vm00 bash[65531]: audit 2026-03-09T18:49:25.443937+0000 mon.c (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/877565247' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]: dispatch
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:26 vm00 bash[65531]: audit 2026-03-09T18:49:25.444324+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]: dispatch
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:26 vm00 bash[69512]: audit 2026-03-09T18:49:25.231909+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]': finished
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:26 vm00 bash[69512]: cluster 2026-03-09T18:49:25.238740+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:26 vm00 bash[69512]: audit 2026-03-09T18:49:25.443937+0000 mon.c (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/877565247' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]: dispatch
2026-03-09T18:49:26.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:26 vm00 bash[69512]: audit 2026-03-09T18:49:25.444324+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]: dispatch
2026-03-09T18:49:26.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:26 vm08 bash[46122]: audit 2026-03-09T18:49:25.231909+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1364179900"}]': finished
2026-03-09T18:49:26.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:26 vm08 bash[46122]: cluster 2026-03-09T18:49:25.238740+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T18:49:26.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:26 vm08 bash[46122]: audit 2026-03-09T18:49:25.443937+0000 mon.c (mon.1) 461 : audit [INF] from='client.? 192.168.123.100:0/877565247' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]: dispatch
2026-03-09T18:49:26.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:26 vm08 bash[46122]: audit 2026-03-09T18:49:25.444324+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]: dispatch
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:27 vm00 bash[69512]: cluster 2026-03-09T18:49:26.122183+0000 mgr.y (mgr.44107) 432 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:27 vm00 bash[69512]: audit 2026-03-09T18:49:26.255909+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]': finished
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:27 vm00 bash[69512]: cluster 2026-03-09T18:49:26.261135+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:27 vm00 bash[69512]: audit 2026-03-09T18:49:26.424646+0000 mon.c (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/3139301327' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:27 vm00 bash[69512]: audit 2026-03-09T18:49:26.425035+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:27 vm00 bash[65531]: cluster 2026-03-09T18:49:26.122183+0000 mgr.y (mgr.44107) 432 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:27 vm00 bash[65531]: audit 2026-03-09T18:49:26.255909+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]': finished
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:27 vm00 bash[65531]: cluster 2026-03-09T18:49:26.261135+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:27 vm00 bash[65531]: audit 2026-03-09T18:49:26.424646+0000 mon.c (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/3139301327' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch
2026-03-09T18:49:27.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:27 vm00 bash[65531]: audit 2026-03-09T18:49:26.425035+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch
2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: cluster 2026-03-09T18:49:26.122183+0000 mgr.y (mgr.44107) 432 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: audit 2026-03-09T18:49:26.255909+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]': finished
2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: audit 2026-03-09T18:49:26.255909+0000 mon.a (mon.0) 663 : audit [INF] from='client.?
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6801/4136601387"}]': finished 2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: cluster 2026-03-09T18:49:26.261135+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: cluster 2026-03-09T18:49:26.261135+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: audit 2026-03-09T18:49:26.424646+0000 mon.c (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/3139301327' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch 2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: audit 2026-03-09T18:49:26.424646+0000 mon.c (mon.1) 462 : audit [INF] from='client.? 192.168.123.100:0/3139301327' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch 2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: audit 2026-03-09T18:49:26.425035+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch 2026-03-09T18:49:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:27 vm08 bash[46122]: audit 2026-03-09T18:49:26.425035+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: audit 2026-03-09T18:49:27.265690+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]': finished 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: audit 2026-03-09T18:49:27.265690+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]': finished 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: cluster 2026-03-09T18:49:27.274641+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: cluster 2026-03-09T18:49:27.274641+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: audit 2026-03-09T18:49:27.426509+0000 mon.c (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/3826321543' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: audit 2026-03-09T18:49:27.426509+0000 mon.c (mon.1) 463 : audit [INF] from='client.? 
192.168.123.100:0/3826321543' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: audit 2026-03-09T18:49:27.426915+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:28 vm00 bash[69512]: audit 2026-03-09T18:49:27.426915+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: audit 2026-03-09T18:49:27.265690+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]': finished 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: audit 2026-03-09T18:49:27.265690+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]': finished 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: cluster 2026-03-09T18:49:27.274641+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: cluster 2026-03-09T18:49:27.274641+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: audit 2026-03-09T18:49:27.426509+0000 mon.c (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/3826321543' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: audit 2026-03-09T18:49:27.426509+0000 mon.c (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/3826321543' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: audit 2026-03-09T18:49:27.426915+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:28 vm00 bash[65531]: audit 2026-03-09T18:49:27.426915+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: audit 2026-03-09T18:49:27.265690+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]': finished 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: audit 2026-03-09T18:49:27.265690+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:6800/4136601387"}]': finished 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: cluster 2026-03-09T18:49:27.274641+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: cluster 2026-03-09T18:49:27.274641+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: audit 2026-03-09T18:49:27.426509+0000 mon.c (mon.1) 463 : audit [INF] from='client.? 192.168.123.100:0/3826321543' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: audit 2026-03-09T18:49:27.426509+0000 mon.c (mon.1) 463 : audit [INF] from='client.? 
192.168.123.100:0/3826321543' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: audit 2026-03-09T18:49:27.426915+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:28.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:28 vm08 bash[46122]: audit 2026-03-09T18:49:27.426915+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]: dispatch 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: cluster 2026-03-09T18:49:28.122489+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: cluster 2026-03-09T18:49:28.122489+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.282896+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]': finished 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.282896+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]': finished 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: cluster 2026-03-09T18:49:28.292513+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: cluster 2026-03-09T18:49:28.292513+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.498207+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.498207+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.505121+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.505121+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.521178+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]: dispatch 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:28.521178+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 
192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]: dispatch 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:29.032122+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:29.032122+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:29.038961+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:29 vm00 bash[69512]: audit 2026-03-09T18:49:29.038961+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: cluster 2026-03-09T18:49:28.122489+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: cluster 2026-03-09T18:49:28.122489+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.282896+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]': finished 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.282896+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]': finished 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: cluster 2026-03-09T18:49:28.292513+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: cluster 2026-03-09T18:49:28.292513+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.498207+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.498207+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.505121+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.505121+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.521178+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 
192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]: dispatch 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:28.521178+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]: dispatch 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:29.032122+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:29.032122+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:29.038961+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:29 vm00 bash[65531]: audit 2026-03-09T18:49:29.038961+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:49:29] "GET /metrics HTTP/1.1" 200 37990 "" "Prometheus/2.51.0" 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: cluster 2026-03-09T18:49:28.122489+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: cluster 2026-03-09T18:49:28.122489+0000 mgr.y (mgr.44107) 433 : cluster [DBG] 
pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.282896+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]': finished 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.282896+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/2915902676"}]': finished 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: cluster 2026-03-09T18:49:28.292513+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: cluster 2026-03-09T18:49:28.292513+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.498207+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.498207+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.505121+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.505121+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.521178+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]: dispatch 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:28.521178+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]: dispatch 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:29.032122+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:29.032122+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:29.038961+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:29 vm08 bash[46122]: audit 2026-03-09T18:49:29.038961+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:30.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:30 vm00 bash[69512]: audit 2026-03-09T18:49:29.506793+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 
192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]': finished 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:30 vm00 bash[69512]: audit 2026-03-09T18:49:29.506793+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]': finished 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:30 vm00 bash[69512]: cluster 2026-03-09T18:49:29.518494+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:30 vm00 bash[69512]: cluster 2026-03-09T18:49:29.518494+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:30 vm00 bash[69512]: cluster 2026-03-09T18:49:30.122793+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:30 vm00 bash[69512]: cluster 2026-03-09T18:49:30.122793+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:30 vm00 bash[65531]: audit 2026-03-09T18:49:29.506793+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 
192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]': finished 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:30 vm00 bash[65531]: audit 2026-03-09T18:49:29.506793+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]': finished 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:30 vm00 bash[65531]: cluster 2026-03-09T18:49:29.518494+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:30 vm00 bash[65531]: cluster 2026-03-09T18:49:29.518494+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:30 vm00 bash[65531]: cluster 2026-03-09T18:49:30.122793+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:49:30.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:30 vm00 bash[65531]: cluster 2026-03-09T18:49:30.122793+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail 2026-03-09T18:49:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:30 vm08 bash[46122]: audit 2026-03-09T18:49:29.506793+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 
192.168.123.100:0/2238321419' entity='client.iscsi.foo.vm00.ywhulq' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.100:0/1065894024"}]': finished
2026-03-09T18:49:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:30 vm08 bash[46122]: cluster 2026-03-09T18:49:29.518494+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T18:49:30.974 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:30 vm08 bash[46122]: cluster 2026-03-09T18:49:30.122793+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail
2026-03-09T18:49:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:33 vm08 bash[46122]: cluster 2026-03-09T18:49:32.123189+0000 mgr.y (mgr.44107) 435 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 872 B/s rd, 0 op/s
2026-03-09T18:49:33.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:33 vm08 bash[46122]: audit 2026-03-09T18:49:33.103120+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:33.497 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:33 vm00 bash[69512]: cluster 2026-03-09T18:49:32.123189+0000 mgr.y (mgr.44107) 435 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 872 B/s rd, 0 op/s
2026-03-09T18:49:33.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:33 vm00 bash[69512]: audit 2026-03-09T18:49:33.103120+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:33.498 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:33 vm00 bash[65531]: cluster 2026-03-09T18:49:32.123189+0000 mgr.y (mgr.44107) 435 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 872 B/s rd, 0 op/s
2026-03-09T18:49:33.498 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:33 vm00 bash[65531]: audit 2026-03-09T18:49:33.103120+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:35.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:33.501544+0000 mgr.y (mgr.44107) 436 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:35.475 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: cluster 2026-03-09T18:49:34.123532+0000 mgr.y (mgr.44107) 437 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 746 B/s rd, 0 op/s
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.554605+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.559929+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.561262+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.561704+0000 mon.c (mon.1) 466 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.566192+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.579240+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.587225+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.590069+0000 mon.c (mon.1) 468 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.591205+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.594480+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.624497+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.626281+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.627697+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.628792+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.630858+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.632571+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.633818+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.634928+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.636010+0000 mon.c (mon.1) 478 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.637088+0000 mon.c (mon.1) 479 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.638251+0000 mon.c (mon.1) 480 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.642704+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.645551+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.645767+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.648542+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]': finished
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.653034+0000 mon.c (mon.1) 482 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.656960+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.661375+0000 mon.c (mon.1) 483 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.665565+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.669459+0000 mon.c (mon.1) 484 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.476 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.670822+0000 mon.c (mon.1) 485 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.672138+0000 mon.c (mon.1) 486 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.673425+0000 mon.c (mon.1) 487 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.674567+0000 mon.c (mon.1) 488 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.675710+0000 mon.c (mon.1) 489 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.677617+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.677816+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.680910+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.683573+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.683778+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.686549+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.689120+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.689335+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.691936+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.694524+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.694727+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.697225+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.699996+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.700191+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.702678+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.705778+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.705917+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.708496+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.710033+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.710204+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.712489+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.714115+0000 mon.c (mon.1) 497 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.714245+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.714912+0000 mon.c (mon.1) 498 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.715038+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.717377+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-09T18:49:35.477 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.719638+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.719759+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.722115+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished
2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.724150+0000 mon.c (mon.1) 500 : audit
[INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.724150+0000 mon.c (mon.1) 500 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.724282+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.724282+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.726602+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.726602+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.728808+0000 mon.c (mon.1) 501 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.478 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.728808+0000 mon.c (mon.1) 501 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.728940+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.728940+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.731395+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.731395+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.732918+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.732918+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.733052+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.733052+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.733698+0000 mon.c (mon.1) 503 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.733698+0000 mon.c (mon.1) 503 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.733816+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.733816+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.734398+0000 mon.c (mon.1) 504 : audit [INF] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.734398+0000 mon.c (mon.1) 504 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.734528+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.734528+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.735183+0000 mon.c (mon.1) 505 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.735183+0000 mon.c (mon.1) 505 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.735331+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 
bash[46122]: audit 2026-03-09T18:49:34.735331+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.735933+0000 mon.c (mon.1) 506 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.735933+0000 mon.c (mon.1) 506 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.736041+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.736041+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.736663+0000 mon.c (mon.1) 507 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.736663+0000 mon.c (mon.1) 507 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.736780+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.736780+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.737686+0000 mon.c (mon.1) 508 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.737686+0000 mon.c (mon.1) 508 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.737795+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.737795+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.740376+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key 
del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.740376+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.741261+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.741261+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.742303+0000 mon.c (mon.1) 510 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.742303+0000 mon.c (mon.1) 510 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.742896+0000 mon.c (mon.1) 511 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.742896+0000 mon.c (mon.1) 511 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.478 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.747149+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.747149+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.786425+0000 mon.c (mon.1) 512 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.786425+0000 mon.c (mon.1) 512 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.787568+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.787568+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.788084+0000 mon.c (mon.1) 514 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.788084+0000 mon.c (mon.1) 514 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.792939+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.479 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:35 vm08 bash[46122]: audit 2026-03-09T18:49:34.792939+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:33.501544+0000 mgr.y (mgr.44107) 436 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:33.501544+0000 mgr.y (mgr.44107) 436 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:35.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: cluster 2026-03-09T18:49:34.123532+0000 mgr.y (mgr.44107) 437 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 746 B/s rd, 0 op/s 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: cluster 2026-03-09T18:49:34.123532+0000 mgr.y (mgr.44107) 437 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 746 B/s rd, 0 op/s 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.554605+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.554605+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.559929+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.559929+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.561262+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.561262+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.561704+0000 mon.c (mon.1) 466 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.561704+0000 mon.c (mon.1) 466 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.566192+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.566192+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.579240+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.579240+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.587225+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.587225+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.590069+0000 mon.c (mon.1) 468 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.590069+0000 mon.c (mon.1) 468 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.591205+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.591205+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.594480+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.594480+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.624497+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.624497+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.626281+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.626281+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: 
audit 2026-03-09T18:49:34.627697+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.627697+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.628792+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.628792+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.630858+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.630858+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.632571+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.632571+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.633818+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.633818+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.634928+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.634928+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.636010+0000 mon.c (mon.1) 478 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.636010+0000 mon.c (mon.1) 478 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.637088+0000 mon.c (mon.1) 479 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 
bash[65531]: audit 2026-03-09T18:49:34.637088+0000 mon.c (mon.1) 479 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.638251+0000 mon.c (mon.1) 480 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.638251+0000 mon.c (mon.1) 480 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.642704+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.642704+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.645551+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.645551+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.645767+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.645767+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.648542+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.648542+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.653034+0000 mon.c (mon.1) 482 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.653034+0000 mon.c (mon.1) 482 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.656960+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.656960+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.630 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.661375+0000 mon.c (mon.1) 483 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.661375+0000 mon.c (mon.1) 483 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.665565+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.665565+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.669459+0000 mon.c (mon.1) 484 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.669459+0000 mon.c (mon.1) 484 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.670822+0000 mon.c (mon.1) 485 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.670822+0000 mon.c (mon.1) 485 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.672138+0000 mon.c (mon.1) 486 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.672138+0000 mon.c (mon.1) 486 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.673425+0000 mon.c (mon.1) 487 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.673425+0000 mon.c (mon.1) 487 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.674567+0000 mon.c (mon.1) 488 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.674567+0000 mon.c (mon.1) 488 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.675710+0000 mon.c (mon.1) 489 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 
2026-03-09T18:49:34.675710+0000 mon.c (mon.1) 489 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.677617+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.677617+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.677816+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.677816+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.680910+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.680910+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 
bash[65531]: audit 2026-03-09T18:49:34.683573+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.683573+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.683778+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.683778+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.686549+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.686549+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.689120+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 
2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.689120+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.689335+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.689335+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.691936+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.691936+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.694524+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.694524+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.694727+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.694727+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.697225+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.697225+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.699996+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.699996+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 
2026-03-09T18:49:34.700191+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.700191+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.702678+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.702678+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.705778+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.630 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.705778+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.705917+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.631 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.705917+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.708496+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.708496+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.710033+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.710033+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.710204+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.710204+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.712489+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.712489+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.714115+0000 mon.c (mon.1) 497 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.714115+0000 mon.c (mon.1) 497 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.714245+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.714245+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 
2026-03-09T18:49:34.714912+0000 mon.c (mon.1) 498 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.714912+0000 mon.c (mon.1) 498 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.715038+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.715038+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.717377+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.717377+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.719638+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config 
rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.719638+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.719759+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.719759+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.722115+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.722115+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.724150+0000 mon.c (mon.1) 500 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:33.501544+0000 
mgr.y (mgr.44107) 436 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:33.501544+0000 mgr.y (mgr.44107) 436 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: cluster 2026-03-09T18:49:34.123532+0000 mgr.y (mgr.44107) 437 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 746 B/s rd, 0 op/s 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: cluster 2026-03-09T18:49:34.123532+0000 mgr.y (mgr.44107) 437 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 746 B/s rd, 0 op/s 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.554605+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.554605+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.559929+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.559929+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.561262+0000 mon.c (mon.1) 465 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.561262+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.561704+0000 mon.c (mon.1) 466 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.561704+0000 mon.c (mon.1) 466 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.566192+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.566192+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.579240+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.579240+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:49:35.631 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.587225+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.587225+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.590069+0000 mon.c (mon.1) 468 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.590069+0000 mon.c (mon.1) 468 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.591205+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.591205+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:49:35.631 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.594480+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.594480+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.624497+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.624497+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.626281+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.626281+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.627697+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.627697+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.628792+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.628792+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.630858+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.630858+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.632571+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.632571+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.633818+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.633818+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.634928+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.634928+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.636010+0000 mon.c (mon.1) 478 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.636010+0000 mon.c (mon.1) 478 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.637088+0000 mon.c (mon.1) 479 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.637088+0000 mon.c (mon.1) 479 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.638251+0000 mon.c (mon.1) 480 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.638251+0000 mon.c (mon.1) 480 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.642704+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.642704+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.645551+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.645551+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.645767+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.645767+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.648542+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]': finished 2026-03-09T18:49:35.632 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.648542+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm00.ywhulq"}]': finished 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.653034+0000 mon.c (mon.1) 482 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.653034+0000 mon.c (mon.1) 482 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.656960+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.656960+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.661375+0000 mon.c (mon.1) 483 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.661375+0000 mon.c (mon.1) 483 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.665565+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.665565+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.669459+0000 mon.c (mon.1) 484 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.669459+0000 mon.c (mon.1) 484 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.670822+0000 mon.c (mon.1) 485 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.670822+0000 mon.c (mon.1) 485 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.672138+0000 mon.c (mon.1) 486 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.672138+0000 mon.c (mon.1) 486 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.673425+0000 mon.c (mon.1) 487 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.673425+0000 mon.c (mon.1) 487 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.674567+0000 mon.c (mon.1) 488 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.674567+0000 mon.c (mon.1) 488 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.675710+0000 mon.c (mon.1) 489 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.675710+0000 mon.c (mon.1) 489 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.677617+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.677617+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: 
dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.677816+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.677816+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.680910+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.680910+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.683573+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.683573+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.632 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.683778+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.683778+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.686549+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.686549+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.689120+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.689120+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.689335+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.689335+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 
' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.691936+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.691936+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.694524+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.694524+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.694727+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.694727+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 
2026-03-09T18:49:34.697225+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.697225+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.699996+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.699996+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.700191+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.700191+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.702678+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 
18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.702678+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.705778+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.705778+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.705917+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.705917+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.708496+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.708496+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': 
finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.710033+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.710033+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.710204+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.710204+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.712489+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.712489+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.714115+0000 mon.c (mon.1) 497 : audit 
[INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.714115+0000 mon.c (mon.1) 497 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.714245+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.714245+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.714912+0000 mon.c (mon.1) 498 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.714912+0000 mon.c (mon.1) 498 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.715038+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.633 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.715038+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.717377+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.717377+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.719638+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.719638+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.719759+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.719759+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.722115+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.722115+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.724150+0000 mon.c (mon.1) 500 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.724150+0000 mon.c (mon.1) 500 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.724282+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.724282+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 
2026-03-09T18:49:34.726602+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.726602+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.728808+0000 mon.c (mon.1) 501 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.728808+0000 mon.c (mon.1) 501 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.728940+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.728940+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.731395+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 
2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.731395+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:49:35.633 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.732918+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.732918+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.733052+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.733052+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.733698+0000 mon.c (mon.1) 503 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.733698+0000 mon.c (mon.1) 503 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.724150+0000 mon.c (mon.1) 500 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.724282+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.724282+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.726602+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.726602+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.728808+0000 mon.c (mon.1) 501 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 
2026-03-09T18:49:34.728808+0000 mon.c (mon.1) 501 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.728940+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.728940+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.731395+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.731395+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.732918+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.732918+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: 
dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.733052+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.733052+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.733698+0000 mon.c (mon.1) 503 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.733698+0000 mon.c (mon.1) 503 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.733816+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.733816+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.734398+0000 mon.c (mon.1) 504 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": 
"config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.734398+0000 mon.c (mon.1) 504 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.734528+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.734528+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.735183+0000 mon.c (mon.1) 505 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.735183+0000 mon.c (mon.1) 505 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.735331+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.735331+0000 mon.a (mon.0) 714 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.735933+0000 mon.c (mon.1) 506 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.735933+0000 mon.c (mon.1) 506 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.736041+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.736041+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.736663+0000 mon.c (mon.1) 507 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.736663+0000 mon.c (mon.1) 507 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 
vm00 bash[65531]: audit 2026-03-09T18:49:34.736780+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.736780+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.737686+0000 mon.c (mon.1) 508 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.737686+0000 mon.c (mon.1) 508 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.737795+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.737795+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.740376+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.740376+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.741261+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.741261+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.742303+0000 mon.c (mon.1) 510 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.742303+0000 mon.c (mon.1) 510 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.742896+0000 mon.c (mon.1) 511 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.742896+0000 mon.c (mon.1) 511 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.634 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.747149+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.747149+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.786425+0000 mon.c (mon.1) 512 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.634 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.786425+0000 mon.c (mon.1) 512 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.787568+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.787568+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.788084+0000 mon.c (mon.1) 514 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.788084+0000 mon.c (mon.1) 514 : audit [INF] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.792939+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:35 vm00 bash[65531]: audit 2026-03-09T18:49:34.792939+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.733816+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.733816+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.734398+0000 mon.c (mon.1) 504 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.734398+0000 mon.c (mon.1) 504 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.734528+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.734528+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.735183+0000 mon.c (mon.1) 505 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.735183+0000 mon.c (mon.1) 505 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.735331+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.735331+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.735933+0000 mon.c (mon.1) 506 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.735933+0000 mon.c (mon.1) 506 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.736041+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.736041+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.736663+0000 mon.c (mon.1) 507 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.736663+0000 mon.c (mon.1) 507 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.736780+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.736780+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.737686+0000 mon.c (mon.1) 508 : audit [INF] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.737686+0000 mon.c (mon.1) 508 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.737795+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.737795+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.740376+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.740376+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.741261+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.741261+0000 mon.c (mon.1) 
509 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.742303+0000 mon.c (mon.1) 510 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.742303+0000 mon.c (mon.1) 510 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.742896+0000 mon.c (mon.1) 511 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.742896+0000 mon.c (mon.1) 511 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.747149+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.747149+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.786425+0000 mon.c (mon.1) 512 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.786425+0000 mon.c (mon.1) 512 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.787568+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.787568+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.788084+0000 mon.c (mon.1) 514 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.788084+0000 mon.c (mon.1) 514 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.792939+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:35.635 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:35 vm00 bash[69512]: audit 2026-03-09T18:49:34.792939+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: audit 2026-03-09T18:49:34.579653+0000 mgr.y 
(mgr.44107) 438 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: audit 2026-03-09T18:49:34.579653+0000 mgr.y (mgr.44107) 438 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.589937+0000 mgr.y (mgr.44107) 439 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.589937+0000 mgr.y (mgr.44107) 439 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: audit 2026-03-09T18:49:34.590390+0000 mgr.y (mgr.44107) 440 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: audit 2026-03-09T18:49:34.590390+0000 mgr.y (mgr.44107) 440 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: audit 2026-03-09T18:49:34.591485+0000 mgr.y (mgr.44107) 441 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: audit 2026-03-09T18:49:34.591485+0000 mgr.y (mgr.44107) 441 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.639007+0000 mgr.y (mgr.44107) 442 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.639007+0000 mgr.y (mgr.44107) 442 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.653878+0000 mgr.y (mgr.44107) 443 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.653878+0000 mgr.y (mgr.44107) 443 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.662181+0000 mgr.y (mgr.44107) 444 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.662181+0000 mgr.y (mgr.44107) 444 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.676542+0000 mgr.y (mgr.44107) 445 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.676542+0000 mgr.y (mgr.44107) 445 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T18:49:36.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:36 vm08 bash[46122]: cephadm 2026-03-09T18:49:34.737342+0000 mgr.y (mgr.44107) 
446 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: audit 2026-03-09T18:49:34.579653+0000 mgr.y (mgr.44107) 438 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: cephadm 2026-03-09T18:49:34.589937+0000 mgr.y (mgr.44107) 439 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: audit 2026-03-09T18:49:34.590390+0000 mgr.y (mgr.44107) 440 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: audit 2026-03-09T18:49:34.591485+0000 mgr.y (mgr.44107) 441 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: cephadm 2026-03-09T18:49:34.639007+0000 mgr.y (mgr.44107) 442 : cephadm [INF] Upgrade: Setting container_image for all iscsi
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: cephadm 2026-03-09T18:49:34.653878+0000 mgr.y (mgr.44107) 443 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: cephadm 2026-03-09T18:49:34.662181+0000 mgr.y (mgr.44107) 444 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: cephadm 2026-03-09T18:49:34.676542+0000 mgr.y (mgr.44107) 445 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:36 vm00 bash[69512]: cephadm 2026-03-09T18:49:34.737342+0000 mgr.y (mgr.44107) 446 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: audit 2026-03-09T18:49:34.579653+0000 mgr.y (mgr.44107) 438 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: cephadm 2026-03-09T18:49:34.589937+0000 mgr.y (mgr.44107) 439 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.100:5000 to Dashboard
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: audit 2026-03-09T18:49:34.590390+0000 mgr.y (mgr.44107) 440 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: audit 2026-03-09T18:49:34.591485+0000 mgr.y (mgr.44107) 441 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm00"}]: dispatch
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: cephadm 2026-03-09T18:49:34.639007+0000 mgr.y (mgr.44107) 442 : cephadm [INF] Upgrade: Setting container_image for all iscsi
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: cephadm 2026-03-09T18:49:34.653878+0000 mgr.y (mgr.44107) 443 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: cephadm 2026-03-09T18:49:34.662181+0000 mgr.y (mgr.44107) 444 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: cephadm 2026-03-09T18:49:34.676542+0000 mgr.y (mgr.44107) 445 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-09T18:49:36.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:36 vm00 bash[65531]: cephadm 2026-03-09T18:49:34.737342+0000 mgr.y (mgr.44107) 446 : cephadm [INF] Upgrade: Complete!
2026-03-09T18:49:37.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:37 vm08 bash[46122]: cluster 2026-03-09T18:49:36.123957+0000 mgr.y (mgr.44107) 447 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:37.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:37 vm00 bash[65531]: cluster 2026-03-09T18:49:36.123957+0000 mgr.y (mgr.44107) 447 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:37.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:37 vm00 bash[69512]: cluster 2026-03-09T18:49:36.123957+0000 mgr.y (mgr.44107) 447 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:39.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:39 vm00 bash[65531]: cluster 2026-03-09T18:49:38.124323+0000 mgr.y (mgr.44107) 448 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T18:49:39.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:39 vm00 bash[65531]: audit 2026-03-09T18:49:38.338213+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:39 vm00 bash[69512]: cluster 2026-03-09T18:49:38.124323+0000 mgr.y (mgr.44107) 448 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T18:49:39.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:39 vm00 bash[69512]: audit 2026-03-09T18:49:38.338213+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:39.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:49:39] "GET /metrics HTTP/1.1" 200 37992 "" "Prometheus/2.51.0"
2026-03-09T18:49:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:39 vm08 bash[46122]: cluster 2026-03-09T18:49:38.124323+0000 mgr.y (mgr.44107) 448 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T18:49:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:39 vm08 bash[46122]: audit 2026-03-09T18:49:38.338213+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-09T18:49:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:41 vm00 bash[65531]: cluster 2026-03-09T18:49:40.124643+0000 mgr.y (mgr.44107) 449 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 964 B/s rd, 0 op/s
2026-03-09T18:49:41.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:41 vm00 bash[69512]: cluster 2026-03-09T18:49:40.124643+0000 mgr.y (mgr.44107) 449 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 964 B/s rd, 0 op/s
2026-03-09T18:49:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:41 vm08 bash[46122]: cluster 2026-03-09T18:49:40.124643+0000 mgr.y (mgr.44107) 449 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 964 B/s rd, 0 op/s
2026-03-09T18:49:42.186 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (20m) 14s ago 26m 14.6M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (7m) 56s ago 26m 66.7M - 10.4.0 c8b91775d855 5e0e30d27ab2
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (19s) 14s ago 26m 76.3M - 3.9 654f31e6858e 8493aed3ce1d
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (7m) 56s ago 29m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (17m) 14s ago 30m 539M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (5m) 14s ago 30m 62.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (6m) 56s ago 29m 51.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (6m) 14s ago 29m 52.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (19m) 14s ago 27m 7984k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (19m) 56s ago 27m 8267k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (3m) 14s ago 29m 53.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (3m) 14s ago 29m 53.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (4m) 14s ago 28m 51.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (4m) 14s ago 28m 77.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (2m) 56s ago 28m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (2m) 56s ago 28m 71.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (2m) 56s ago 27m 48.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (114s) 56s ago 27m 69.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9904fad47d23
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (7m) 56s ago 27m 47.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (63s) 14s ago 26m 91.4M - 19.2.3-678-ge911bdeb 654f31e6858e c812b26432aa
2026-03-09T18:49:42.646 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (61s) 56s ago 26m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e a1f2a8ce96e5
2026-03-09T18:49:42.696 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 15
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:49:43.149 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:49:43.206 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"'
2026-03-09T18:49:43.439 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:43 vm00 bash[69512]: cluster 2026-03-09T18:49:42.125195+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:43.439 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:43 vm00 bash[69512]: audit 2026-03-09T18:49:42.127919+0000 mgr.y (mgr.44107) 451 : audit [DBG] from='client.54705 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:43.439 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:43 vm00 bash[69512]: audit 2026-03-09T18:49:43.152523+0000 mon.a (mon.0) 722 : audit [DBG] from='client.? 192.168.123.100:0/3910849870' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:43.440 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:43 vm00 bash[65531]: cluster 2026-03-09T18:49:42.125195+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:43.440 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:43 vm00 bash[65531]: audit 2026-03-09T18:49:42.127919+0000 mgr.y (mgr.44107) 451 : audit [DBG] from='client.54705 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:43.440 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:43 vm00 bash[65531]: audit 2026-03-09T18:49:43.152523+0000 mon.a (mon.0) 722 : audit [DBG] from='client.? 192.168.123.100:0/3910849870' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:43.457 INFO:teuthology.orchestra.run.vm00.stdout:wait for servicemap items w/ changing names to refresh
2026-03-09T18:49:43.492 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60'
2026-03-09T18:49:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:43 vm08 bash[46122]: cluster 2026-03-09T18:49:42.125195+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:43 vm08 bash[46122]: audit 2026-03-09T18:49:42.127919+0000 mgr.y (mgr.44107) 451 : audit [DBG] from='client.54705 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:43 vm08 bash[46122]: audit 2026-03-09T18:49:43.152523+0000 mon.a (mon.0) 722 : audit [DBG] from='client.? 192.168.123.100:0/3910849870' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:49:44.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:44 vm00 bash[65531]: audit 2026-03-09T18:49:42.645506+0000 mgr.y (mgr.44107) 452 : audit [DBG] from='client.54711 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:44.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:44 vm00 bash[69512]: audit 2026-03-09T18:49:42.645506+0000 mgr.y (mgr.44107) 452 : audit [DBG] from='client.54711 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:44.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:44 vm08 bash[46122]: audit 2026-03-09T18:49:42.645506+0000 mgr.y (mgr.44107) 452 : audit [DBG] from='client.54711 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:49:45.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:45 vm00 bash[65531]: audit 2026-03-09T18:49:43.509925+0000 mgr.y (mgr.44107) 453 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:45 vm00 bash[65531]: cluster 2026-03-09T18:49:44.125492+0000 mgr.y (mgr.44107) 454 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:45 vm00 bash[69512]: audit 2026-03-09T18:49:43.509925+0000 mgr.y (mgr.44107) 453 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:45 vm00 bash[69512]: cluster 2026-03-09T18:49:44.125492+0000 mgr.y (mgr.44107) 454 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:45 vm08 bash[46122]: audit 2026-03-09T18:49:43.509925+0000 mgr.y (mgr.44107) 453 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T18:49:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:45 vm08 bash[46122]: cluster 2026-03-09T18:49:44.125492+0000 mgr.y (mgr.44107) 454 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:47 vm00 bash[65531]: cluster 2026-03-09T18:49:46.125944+0000 mgr.y (mgr.44107) 455 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:47 vm00 bash[69512]: cluster 2026-03-09T18:49:46.125944+0000 mgr.y (mgr.44107) 455 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:47 vm08 bash[46122]: cluster 2026-03-09T18:49:46.125944+0000 mgr.y (mgr.44107) 455 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:48 vm00 bash[65531]: audit 2026-03-09T18:49:48.103302+0000 mon.c (mon.1) 515 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:48 vm00 bash[69512]: audit 2026-03-09T18:49:48.103302+0000 mon.c (mon.1) 515 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:48 vm08 bash[46122]: audit 2026-03-09T18:49:48.103302+0000 mon.c (mon.1) 515 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:49:49.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:49 vm00 bash[65531]: cluster 2026-03-09T18:49:48.126280+0000 mgr.y (mgr.44107) 456 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:49.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:49 vm00 bash[69512]: cluster 2026-03-09T18:49:48.126280+0000 mgr.y (mgr.44107) 456 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:49.629 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:49:49] "GET /metrics HTTP/1.1" 200 37990 "" "Prometheus/2.51.0"
2026-03-09T18:49:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:49 vm08 bash[46122]: cluster 2026-03-09T18:49:48.126280+0000 mgr.y (mgr.44107) 456 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:51 vm08 bash[46122]: cluster 2026-03-09T18:49:50.126589+0000 mgr.y (mgr.44107) 457 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:51.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:51 vm00 bash[65531]: cluster 2026-03-09T18:49:50.126589+0000 mgr.y (mgr.44107) 457 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:51.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:51 vm00 bash[69512]: cluster 2026-03-09T18:49:50.126589+0000 mgr.y (mgr.44107) 457 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:49:53.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:53 vm08 bash[46122]: cluster 2026-03-09T18:49:52.127076+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:49:53.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:53 vm00 bash[65531]: cluster 2026-03-09T18:49:52.127076+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
rd, 1 op/s 2026-03-09T18:49:53.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:53 vm00 bash[69512]: cluster 2026-03-09T18:49:52.127076+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:53.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:53 vm00 bash[69512]: cluster 2026-03-09T18:49:52.127076+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:55 vm08 bash[46122]: audit 2026-03-09T18:49:53.520291+0000 mgr.y (mgr.44107) 459 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:55 vm08 bash[46122]: audit 2026-03-09T18:49:53.520291+0000 mgr.y (mgr.44107) 459 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:55 vm08 bash[46122]: cluster 2026-03-09T18:49:54.127366+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:55.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:55 vm08 bash[46122]: cluster 2026-03-09T18:49:54.127366+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:55.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:55 vm00 bash[65531]: audit 2026-03-09T18:49:53.520291+0000 mgr.y (mgr.44107) 459 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:55 vm00 bash[65531]: audit 2026-03-09T18:49:53.520291+0000 mgr.y (mgr.44107) 459 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:55 vm00 bash[65531]: cluster 2026-03-09T18:49:54.127366+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:55 vm00 bash[65531]: cluster 2026-03-09T18:49:54.127366+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:55 vm00 bash[69512]: audit 2026-03-09T18:49:53.520291+0000 mgr.y (mgr.44107) 459 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:55 vm00 bash[69512]: audit 2026-03-09T18:49:53.520291+0000 mgr.y (mgr.44107) 459 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:55 vm00 bash[69512]: cluster 2026-03-09T18:49:54.127366+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:55.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:55 vm00 bash[69512]: cluster 2026-03-09T18:49:54.127366+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v239: 161 pgs: 161 
active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:57 vm08 bash[46122]: cluster 2026-03-09T18:49:56.127772+0000 mgr.y (mgr.44107) 461 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:57.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:57 vm08 bash[46122]: cluster 2026-03-09T18:49:56.127772+0000 mgr.y (mgr.44107) 461 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:57.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:57 vm00 bash[65531]: cluster 2026-03-09T18:49:56.127772+0000 mgr.y (mgr.44107) 461 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:57.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:57 vm00 bash[65531]: cluster 2026-03-09T18:49:56.127772+0000 mgr.y (mgr.44107) 461 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:57 vm00 bash[69512]: cluster 2026-03-09T18:49:56.127772+0000 mgr.y (mgr.44107) 461 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:57.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:57 vm00 bash[69512]: cluster 2026-03-09T18:49:56.127772+0000 mgr.y (mgr.44107) 461 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:49:59.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:59 vm08 bash[46122]: cluster 2026-03-09T18:49:58.128203+0000 mgr.y (mgr.44107) 462 : cluster [DBG] pgmap 
v241: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:59.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:49:59 vm08 bash[46122]: cluster 2026-03-09T18:49:58.128203+0000 mgr.y (mgr.44107) 462 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:59.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:59 vm00 bash[65531]: cluster 2026-03-09T18:49:58.128203+0000 mgr.y (mgr.44107) 462 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:59.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:49:59 vm00 bash[65531]: cluster 2026-03-09T18:49:58.128203+0000 mgr.y (mgr.44107) 462 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:59 vm00 bash[69512]: cluster 2026-03-09T18:49:58.128203+0000 mgr.y (mgr.44107) 462 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:59.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:49:59 vm00 bash[69512]: cluster 2026-03-09T18:49:58.128203+0000 mgr.y (mgr.44107) 462 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:49:59.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:49:59 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:49:59] "GET /metrics HTTP/1.1" 200 37990 "" "Prometheus/2.51.0" 2026-03-09T18:50:00.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:00 vm08 bash[46122]: cluster 2026-03-09T18:50:00.000138+0000 mon.a (mon.0) 723 : cluster [INF] overall HEALTH_OK 2026-03-09T18:50:00.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 
18:50:00 vm08 bash[46122]: cluster 2026-03-09T18:50:00.000138+0000 mon.a (mon.0) 723 : cluster [INF] overall HEALTH_OK 2026-03-09T18:50:00.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:00 vm00 bash[65531]: cluster 2026-03-09T18:50:00.000138+0000 mon.a (mon.0) 723 : cluster [INF] overall HEALTH_OK 2026-03-09T18:50:00.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:00 vm00 bash[65531]: cluster 2026-03-09T18:50:00.000138+0000 mon.a (mon.0) 723 : cluster [INF] overall HEALTH_OK 2026-03-09T18:50:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:00 vm00 bash[69512]: cluster 2026-03-09T18:50:00.000138+0000 mon.a (mon.0) 723 : cluster [INF] overall HEALTH_OK 2026-03-09T18:50:00.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:00 vm00 bash[69512]: cluster 2026-03-09T18:50:00.000138+0000 mon.a (mon.0) 723 : cluster [INF] overall HEALTH_OK 2026-03-09T18:50:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:01 vm08 bash[46122]: cluster 2026-03-09T18:50:00.128618+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:01.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:01 vm08 bash[46122]: cluster 2026-03-09T18:50:00.128618+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:01.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:01 vm00 bash[65531]: cluster 2026-03-09T18:50:00.128618+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:01.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:01 vm00 bash[65531]: cluster 2026-03-09T18:50:00.128618+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T18:50:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:01 vm00 bash[69512]: cluster 2026-03-09T18:50:00.128618+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:01.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:01 vm00 bash[69512]: cluster 2026-03-09T18:50:00.128618+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:03.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:03 vm08 bash[46122]: cluster 2026-03-09T18:50:02.129091+0000 mgr.y (mgr.44107) 464 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:03.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:03 vm08 bash[46122]: cluster 2026-03-09T18:50:02.129091+0000 mgr.y (mgr.44107) 464 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:03.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:03 vm08 bash[46122]: audit 2026-03-09T18:50:03.103700+0000 mon.c (mon.1) 516 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:03.728 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:03 vm08 bash[46122]: audit 2026-03-09T18:50:03.103700+0000 mon.c (mon.1) 516 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:03.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:03 vm00 bash[65531]: cluster 2026-03-09T18:50:02.129091+0000 mgr.y (mgr.44107) 464 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 
GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:03 vm00 bash[65531]: cluster 2026-03-09T18:50:02.129091+0000 mgr.y (mgr.44107) 464 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:03 vm00 bash[65531]: audit 2026-03-09T18:50:03.103700+0000 mon.c (mon.1) 516 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:03 vm00 bash[65531]: audit 2026-03-09T18:50:03.103700+0000 mon.c (mon.1) 516 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:03 vm00 bash[69512]: cluster 2026-03-09T18:50:02.129091+0000 mgr.y (mgr.44107) 464 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:03 vm00 bash[69512]: cluster 2026-03-09T18:50:02.129091+0000 mgr.y (mgr.44107) 464 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:03 vm00 bash[69512]: audit 2026-03-09T18:50:03.103700+0000 mon.c (mon.1) 516 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:03.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:03 vm00 bash[69512]: audit 2026-03-09T18:50:03.103700+0000 mon.c (mon.1) 516 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:05 vm08 bash[46122]: audit 2026-03-09T18:50:03.523144+0000 mgr.y (mgr.44107) 465 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:05 vm08 bash[46122]: audit 2026-03-09T18:50:03.523144+0000 mgr.y (mgr.44107) 465 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:05 vm08 bash[46122]: cluster 2026-03-09T18:50:04.129433+0000 mgr.y (mgr.44107) 466 : cluster [DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:05.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:05 vm08 bash[46122]: cluster 2026-03-09T18:50:04.129433+0000 mgr.y (mgr.44107) 466 : cluster [DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:05 vm00 bash[65531]: audit 2026-03-09T18:50:03.523144+0000 mgr.y (mgr.44107) 465 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:05 vm00 bash[65531]: audit 2026-03-09T18:50:03.523144+0000 mgr.y (mgr.44107) 465 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:05 vm00 bash[65531]: cluster 2026-03-09T18:50:04.129433+0000 mgr.y (mgr.44107) 466 : cluster 
[DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:05 vm00 bash[65531]: cluster 2026-03-09T18:50:04.129433+0000 mgr.y (mgr.44107) 466 : cluster [DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:05 vm00 bash[69512]: audit 2026-03-09T18:50:03.523144+0000 mgr.y (mgr.44107) 465 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:05 vm00 bash[69512]: audit 2026-03-09T18:50:03.523144+0000 mgr.y (mgr.44107) 465 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:05 vm00 bash[69512]: cluster 2026-03-09T18:50:04.129433+0000 mgr.y (mgr.44107) 466 : cluster [DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:05.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:05 vm00 bash[69512]: cluster 2026-03-09T18:50:04.129433+0000 mgr.y (mgr.44107) 466 : cluster [DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:07 vm08 bash[46122]: cluster 2026-03-09T18:50:06.129865+0000 mgr.y (mgr.44107) 467 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:07.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:07 vm08 bash[46122]: cluster 2026-03-09T18:50:06.129865+0000 mgr.y 
(mgr.44107) 467 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:07.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:07 vm00 bash[65531]: cluster 2026-03-09T18:50:06.129865+0000 mgr.y (mgr.44107) 467 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:07.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:07 vm00 bash[65531]: cluster 2026-03-09T18:50:06.129865+0000 mgr.y (mgr.44107) 467 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:07.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:07 vm00 bash[69512]: cluster 2026-03-09T18:50:06.129865+0000 mgr.y (mgr.44107) 467 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:07.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:07 vm00 bash[69512]: cluster 2026-03-09T18:50:06.129865+0000 mgr.y (mgr.44107) 467 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:09 vm08 bash[46122]: cluster 2026-03-09T18:50:08.130335+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:09.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:09 vm08 bash[46122]: cluster 2026-03-09T18:50:08.130335+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:09.878 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:09 vm00 bash[69512]: cluster 
2026-03-09T18:50:08.130335+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:09.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:09 vm00 bash[69512]: cluster 2026-03-09T18:50:08.130335+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:09.879 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:09 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:50:09] "GET /metrics HTTP/1.1" 200 37989 "" "Prometheus/2.51.0" 2026-03-09T18:50:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:09 vm00 bash[65531]: cluster 2026-03-09T18:50:08.130335+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:09.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:09 vm00 bash[65531]: cluster 2026-03-09T18:50:08.130335+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:11 vm08 bash[46122]: cluster 2026-03-09T18:50:10.130705+0000 mgr.y (mgr.44107) 469 : cluster [DBG] pgmap v247: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:11.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:11 vm08 bash[46122]: cluster 2026-03-09T18:50:10.130705+0000 mgr.y (mgr.44107) 469 : cluster [DBG] pgmap v247: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:11.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:11 vm00 bash[65531]: cluster 2026-03-09T18:50:10.130705+0000 mgr.y (mgr.44107) 469 : cluster [DBG] pgmap 
v247: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:11.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:11 vm00 bash[65531]: cluster 2026-03-09T18:50:10.130705+0000 mgr.y (mgr.44107) 469 : cluster [DBG] pgmap v247: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:11 vm00 bash[69512]: cluster 2026-03-09T18:50:10.130705+0000 mgr.y (mgr.44107) 469 : cluster [DBG] pgmap v247: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:11.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:11 vm00 bash[69512]: cluster 2026-03-09T18:50:10.130705+0000 mgr.y (mgr.44107) 469 : cluster [DBG] pgmap v247: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:12 vm08 bash[46122]: cluster 2026-03-09T18:50:12.131213+0000 mgr.y (mgr.44107) 470 : cluster [DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:12.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:12 vm08 bash[46122]: cluster 2026-03-09T18:50:12.131213+0000 mgr.y (mgr.44107) 470 : cluster [DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:12.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:12 vm00 bash[65531]: cluster 2026-03-09T18:50:12.131213+0000 mgr.y (mgr.44107) 470 : cluster [DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:12.879 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:12 vm00 bash[65531]: cluster 2026-03-09T18:50:12.131213+0000 mgr.y (mgr.44107) 470 : cluster 
[DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:12 vm00 bash[69512]: cluster 2026-03-09T18:50:12.131213+0000 mgr.y (mgr.44107) 470 : cluster [DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:12.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:12 vm00 bash[69512]: cluster 2026-03-09T18:50:12.131213+0000 mgr.y (mgr.44107) 470 : cluster [DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:15 vm08 bash[46122]: audit 2026-03-09T18:50:13.530765+0000 mgr.y (mgr.44107) 471 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:15 vm08 bash[46122]: audit 2026-03-09T18:50:13.530765+0000 mgr.y (mgr.44107) 471 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:15 vm08 bash[46122]: cluster 2026-03-09T18:50:14.131487+0000 mgr.y (mgr.44107) 472 : cluster [DBG] pgmap v249: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:15.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:15 vm08 bash[46122]: cluster 2026-03-09T18:50:14.131487+0000 mgr.y (mgr.44107) 472 : cluster [DBG] pgmap v249: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:15.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:15 vm00 bash[65531]: audit 2026-03-09T18:50:13.530765+0000 mgr.y 
(mgr.44107) 471 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:15 vm00 bash[65531]: audit 2026-03-09T18:50:13.530765+0000 mgr.y (mgr.44107) 471 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:15 vm00 bash[65531]: cluster 2026-03-09T18:50:14.131487+0000 mgr.y (mgr.44107) 472 : cluster [DBG] pgmap v249: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:15 vm00 bash[65531]: cluster 2026-03-09T18:50:14.131487+0000 mgr.y (mgr.44107) 472 : cluster [DBG] pgmap v249: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:15 vm00 bash[69512]: audit 2026-03-09T18:50:13.530765+0000 mgr.y (mgr.44107) 471 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:15 vm00 bash[69512]: audit 2026-03-09T18:50:13.530765+0000 mgr.y (mgr.44107) 471 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:15 vm00 bash[69512]: cluster 2026-03-09T18:50:14.131487+0000 mgr.y (mgr.44107) 472 : cluster [DBG] pgmap v249: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:15.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:15 vm00 bash[69512]: cluster 
2026-03-09T18:50:14.131487+0000 mgr.y (mgr.44107) 472 : cluster [DBG] pgmap v249: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:17 vm08 bash[46122]: cluster 2026-03-09T18:50:16.131882+0000 mgr.y (mgr.44107) 473 : cluster [DBG] pgmap v250: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:17.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:17 vm08 bash[46122]: cluster 2026-03-09T18:50:16.131882+0000 mgr.y (mgr.44107) 473 : cluster [DBG] pgmap v250: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:17.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:17 vm00 bash[65531]: cluster 2026-03-09T18:50:16.131882+0000 mgr.y (mgr.44107) 473 : cluster [DBG] pgmap v250: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:17.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:17 vm00 bash[65531]: cluster 2026-03-09T18:50:16.131882+0000 mgr.y (mgr.44107) 473 : cluster [DBG] pgmap v250: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:17 vm00 bash[69512]: cluster 2026-03-09T18:50:16.131882+0000 mgr.y (mgr.44107) 473 : cluster [DBG] pgmap v250: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:17.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:17 vm00 bash[69512]: cluster 2026-03-09T18:50:16.131882+0000 mgr.y (mgr.44107) 473 : cluster [DBG] pgmap v250: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:18 vm08 
bash[46122]: audit 2026-03-09T18:50:18.103871+0000 mon.c (mon.1) 517 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:18.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:18 vm08 bash[46122]: audit 2026-03-09T18:50:18.103871+0000 mon.c (mon.1) 517 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:18.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:18 vm00 bash[65531]: audit 2026-03-09T18:50:18.103871+0000 mon.c (mon.1) 517 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:18.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:18 vm00 bash[65531]: audit 2026-03-09T18:50:18.103871+0000 mon.c (mon.1) 517 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:18 vm00 bash[69512]: audit 2026-03-09T18:50:18.103871+0000 mon.c (mon.1) 517 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:18.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:18 vm00 bash[69512]: audit 2026-03-09T18:50:18.103871+0000 mon.c (mon.1) 517 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:19.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:19 vm08 bash[46122]: cluster 2026-03-09T18:50:18.132224+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v251: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:19.474 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:19 vm08 bash[46122]: cluster 2026-03-09T18:50:18.132224+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v251: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:19.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:19 vm00 bash[69512]: cluster 2026-03-09T18:50:18.132224+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v251: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:19.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:19 vm00 bash[69512]: cluster 2026-03-09T18:50:18.132224+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v251: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:19.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:19 vm00 bash[65531]: cluster 2026-03-09T18:50:18.132224+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v251: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:19.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:19 vm00 bash[65531]: cluster 2026-03-09T18:50:18.132224+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v251: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:19.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:19 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:50:19] "GET /metrics HTTP/1.1" 200 37993 "" "Prometheus/2.51.0" 2026-03-09T18:50:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:21 vm08 bash[46122]: cluster 2026-03-09T18:50:20.132561+0000 mgr.y (mgr.44107) 475 : cluster [DBG] pgmap v252: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:21.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:21 vm08 bash[46122]: 
cluster 2026-03-09T18:50:20.132561+0000 mgr.y (mgr.44107) 475 : cluster [DBG] pgmap v252: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:21.628 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:21 vm00 bash[69512]: cluster 2026-03-09T18:50:20.132561+0000 mgr.y (mgr.44107) 475 : cluster [DBG] pgmap v252: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:21.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:21 vm00 bash[69512]: cluster 2026-03-09T18:50:20.132561+0000 mgr.y (mgr.44107) 475 : cluster [DBG] pgmap v252: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:21 vm00 bash[65531]: cluster 2026-03-09T18:50:20.132561+0000 mgr.y (mgr.44107) 475 : cluster [DBG] pgmap v252: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:21.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:21 vm00 bash[65531]: cluster 2026-03-09T18:50:20.132561+0000 mgr.y (mgr.44107) 475 : cluster [DBG] pgmap v252: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:23 vm08 bash[46122]: cluster 2026-03-09T18:50:22.133025+0000 mgr.y (mgr.44107) 476 : cluster [DBG] pgmap v253: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:23.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:23 vm08 bash[46122]: cluster 2026-03-09T18:50:22.133025+0000 mgr.y (mgr.44107) 476 : cluster [DBG] pgmap v253: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:23.531 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:23 vm00 
bash[69512]: cluster 2026-03-09T18:50:22.133025+0000 mgr.y (mgr.44107) 476 : cluster [DBG] pgmap v253: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:23.531 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:23 vm00 bash[69512]: cluster 2026-03-09T18:50:22.133025+0000 mgr.y (mgr.44107) 476 : cluster [DBG] pgmap v253: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:23.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:23 vm00 bash[65531]: cluster 2026-03-09T18:50:22.133025+0000 mgr.y (mgr.44107) 476 : cluster [DBG] pgmap v253: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:23.532 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:23 vm00 bash[65531]: cluster 2026-03-09T18:50:22.133025+0000 mgr.y (mgr.44107) 476 : cluster [DBG] pgmap v253: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:25.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:25 vm00 bash[65531]: audit 2026-03-09T18:50:23.535257+0000 mgr.y (mgr.44107) 477 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:25 vm00 bash[65531]: audit 2026-03-09T18:50:23.535257+0000 mgr.y (mgr.44107) 477 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:25.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:25 vm00 bash[65531]: cluster 2026-03-09T18:50:24.133394+0000 mgr.y (mgr.44107) 478 : cluster [DBG] pgmap v254: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:25.629 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:25 vm00 bash[65531]: cluster 2026-03-09T18:50:24.133394+0000 mgr.y (mgr.44107) 478 : cluster [DBG] pgmap v254: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:25 vm00 bash[69512]: audit 2026-03-09T18:50:23.535257+0000 mgr.y (mgr.44107) 477 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:25 vm00 bash[69512]: audit 2026-03-09T18:50:23.535257+0000 mgr.y (mgr.44107) 477 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:25 vm00 bash[69512]: cluster 2026-03-09T18:50:24.133394+0000 mgr.y (mgr.44107) 478 : cluster [DBG] pgmap v254: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:25.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:25 vm00 bash[69512]: cluster 2026-03-09T18:50:24.133394+0000 mgr.y (mgr.44107) 478 : cluster [DBG] pgmap v254: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:25.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:25 vm08 bash[46122]: audit 2026-03-09T18:50:23.535257+0000 mgr.y (mgr.44107) 477 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:25.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:25 vm08 bash[46122]: audit 2026-03-09T18:50:23.535257+0000 mgr.y (mgr.44107) 477 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-09T18:50:25.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:25 vm08 bash[46122]: cluster 2026-03-09T18:50:24.133394+0000 mgr.y (mgr.44107) 478 : cluster [DBG] pgmap v254: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:25.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:25 vm08 bash[46122]: cluster 2026-03-09T18:50:24.133394+0000 mgr.y (mgr.44107) 478 : cluster [DBG] pgmap v254: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:27.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:27 vm00 bash[65531]: cluster 2026-03-09T18:50:26.133904+0000 mgr.y (mgr.44107) 479 : cluster [DBG] pgmap v255: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:27.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:27 vm00 bash[65531]: cluster 2026-03-09T18:50:26.133904+0000 mgr.y (mgr.44107) 479 : cluster [DBG] pgmap v255: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:27 vm00 bash[69512]: cluster 2026-03-09T18:50:26.133904+0000 mgr.y (mgr.44107) 479 : cluster [DBG] pgmap v255: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:27.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:27 vm00 bash[69512]: cluster 2026-03-09T18:50:26.133904+0000 mgr.y (mgr.44107) 479 : cluster [DBG] pgmap v255: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:27 vm08 bash[46122]: cluster 2026-03-09T18:50:26.133904+0000 mgr.y (mgr.44107) 479 : cluster [DBG] pgmap v255: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:27.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:27 vm08 bash[46122]: cluster 2026-03-09T18:50:26.133904+0000 mgr.y (mgr.44107) 479 : cluster [DBG] pgmap v255: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:29.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:29 vm00 bash[65531]: cluster 2026-03-09T18:50:28.134363+0000 mgr.y (mgr.44107) 480 : cluster [DBG] pgmap v256: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:29.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:29 vm00 bash[65531]: cluster 2026-03-09T18:50:28.134363+0000 mgr.y (mgr.44107) 480 : cluster [DBG] pgmap v256: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:29.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:29 vm00 bash[69512]: cluster 2026-03-09T18:50:28.134363+0000 mgr.y (mgr.44107) 480 : cluster [DBG] pgmap v256: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:29.523 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:29 vm00 bash[69512]: cluster 2026-03-09T18:50:28.134363+0000 mgr.y (mgr.44107) 480 : cluster [DBG] pgmap v256: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:29 vm08 bash[46122]: cluster 2026-03-09T18:50:28.134363+0000 mgr.y (mgr.44107) 480 : cluster [DBG] pgmap v256: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:29.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:29 vm08 bash[46122]: cluster 2026-03-09T18:50:28.134363+0000 mgr.y (mgr.44107) 480 : cluster [DBG] pgmap v256: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:29.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:29 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:50:29] "GET /metrics HTTP/1.1" 200 37993 "" "Prometheus/2.51.0" 2026-03-09T18:50:31.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:31 vm00 bash[65531]: cluster 2026-03-09T18:50:30.134745+0000 mgr.y (mgr.44107) 481 : cluster [DBG] pgmap v257: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:31.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:31 vm00 bash[65531]: cluster 2026-03-09T18:50:30.134745+0000 mgr.y (mgr.44107) 481 : cluster [DBG] pgmap v257: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:31.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:31 vm00 bash[69512]: cluster 2026-03-09T18:50:30.134745+0000 mgr.y (mgr.44107) 481 : cluster [DBG] pgmap v257: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:31.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:31 vm00 bash[69512]: cluster 2026-03-09T18:50:30.134745+0000 mgr.y (mgr.44107) 481 : cluster [DBG] pgmap v257: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:31.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:31 vm08 bash[46122]: cluster 2026-03-09T18:50:30.134745+0000 mgr.y (mgr.44107) 481 : cluster [DBG] pgmap v257: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:31.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:31 vm08 bash[46122]: cluster 2026-03-09T18:50:30.134745+0000 mgr.y (mgr.44107) 481 : cluster [DBG] pgmap v257: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:33.533 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:33 vm00 bash[65531]: cluster 2026-03-09T18:50:32.135227+0000 mgr.y (mgr.44107) 482 : cluster [DBG] pgmap v258: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:33.533 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:33 vm00 bash[65531]: cluster 2026-03-09T18:50:32.135227+0000 mgr.y (mgr.44107) 482 : cluster [DBG] pgmap v258: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:33.534 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:33 vm00 bash[65531]: audit 2026-03-09T18:50:33.104052+0000 mon.c (mon.1) 518 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:33.534 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:33 vm00 bash[65531]: audit 2026-03-09T18:50:33.104052+0000 mon.c (mon.1) 518 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:33.534 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:33 vm00 bash[69512]: cluster 2026-03-09T18:50:32.135227+0000 mgr.y (mgr.44107) 482 : cluster [DBG] pgmap v258: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:33.534 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:33 vm00 bash[69512]: cluster 2026-03-09T18:50:32.135227+0000 mgr.y (mgr.44107) 482 : cluster [DBG] pgmap v258: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:33.534 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:33 vm00 bash[69512]: audit 2026-03-09T18:50:33.104052+0000 mon.c (mon.1) 518 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-09T18:50:33.534 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:33 vm00 bash[69512]: audit 2026-03-09T18:50:33.104052+0000 mon.c (mon.1) 518 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:33 vm08 bash[46122]: cluster 2026-03-09T18:50:32.135227+0000 mgr.y (mgr.44107) 482 : cluster [DBG] pgmap v258: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:33 vm08 bash[46122]: cluster 2026-03-09T18:50:32.135227+0000 mgr.y (mgr.44107) 482 : cluster [DBG] pgmap v258: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:33 vm08 bash[46122]: audit 2026-03-09T18:50:33.104052+0000 mon.c (mon.1) 518 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:33.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:33 vm08 bash[46122]: audit 2026-03-09T18:50:33.104052+0000 mon.c (mon.1) 518 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:33.537388+0000 mgr.y (mgr.44107) 483 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:33.537388+0000 mgr.y (mgr.44107) 483 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: cluster 2026-03-09T18:50:34.135542+0000 mgr.y (mgr.44107) 484 : cluster [DBG] pgmap v259: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: cluster 2026-03-09T18:50:34.135542+0000 mgr.y (mgr.44107) 484 : cluster [DBG] pgmap v259: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:34.833241+0000 mon.c (mon.1) 519 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:34.833241+0000 mon.c (mon.1) 519 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:35.143114+0000 mon.c (mon.1) 520 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:35.143114+0000 mon.c (mon.1) 520 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:35.143737+0000 mon.c (mon.1) 521 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:35.143737+0000 mon.c (mon.1) 521 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:35.149119+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:35 vm00 bash[65531]: audit 2026-03-09T18:50:35.149119+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:33.537388+0000 mgr.y (mgr.44107) 483 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:33.537388+0000 mgr.y (mgr.44107) 483 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: cluster 2026-03-09T18:50:34.135542+0000 mgr.y (mgr.44107) 484 : cluster [DBG] pgmap v259: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: cluster 2026-03-09T18:50:34.135542+0000 mgr.y (mgr.44107) 484 : cluster [DBG] pgmap v259: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:34.833241+0000 mon.c (mon.1) 519 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:34.833241+0000 mon.c (mon.1) 519 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:35.143114+0000 mon.c (mon.1) 520 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:35.143114+0000 mon.c (mon.1) 520 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:35.143737+0000 mon.c (mon.1) 521 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:35.143737+0000 mon.c (mon.1) 521 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 2026-03-09T18:50:35.149119+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:50:35.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:35 vm00 bash[69512]: audit 
2026-03-09T18:50:35.149119+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:33.537388+0000 mgr.y (mgr.44107) 483 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:33.537388+0000 mgr.y (mgr.44107) 483 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: cluster 2026-03-09T18:50:34.135542+0000 mgr.y (mgr.44107) 484 : cluster [DBG] pgmap v259: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: cluster 2026-03-09T18:50:34.135542+0000 mgr.y (mgr.44107) 484 : cluster [DBG] pgmap v259: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:34.833241+0000 mon.c (mon.1) 519 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:34.833241+0000 mon.c (mon.1) 519 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:35.143114+0000 mon.c (mon.1) 520 : audit [DBG] 
from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:35.143114+0000 mon.c (mon.1) 520 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:35.143737+0000 mon.c (mon.1) 521 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:35.143737+0000 mon.c (mon.1) 521 : audit [INF] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:35.149119+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:50:35.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:35 vm08 bash[46122]: audit 2026-03-09T18:50:35.149119+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-09T18:50:37.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:37 vm00 bash[65531]: cluster 2026-03-09T18:50:36.136076+0000 mgr.y (mgr.44107) 485 : cluster [DBG] pgmap v260: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:37.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:37 vm00 bash[65531]: cluster 2026-03-09T18:50:36.136076+0000 mgr.y (mgr.44107) 485 : cluster [DBG] pgmap v260: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:37.629 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:37 vm00 bash[69512]: cluster 2026-03-09T18:50:36.136076+0000 mgr.y (mgr.44107) 485 : cluster [DBG] pgmap v260: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:37.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:37 vm00 bash[69512]: cluster 2026-03-09T18:50:36.136076+0000 mgr.y (mgr.44107) 485 : cluster [DBG] pgmap v260: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:37.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:37 vm08 bash[46122]: cluster 2026-03-09T18:50:36.136076+0000 mgr.y (mgr.44107) 485 : cluster [DBG] pgmap v260: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:37.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:37 vm08 bash[46122]: cluster 2026-03-09T18:50:36.136076+0000 mgr.y (mgr.44107) 485 : cluster [DBG] pgmap v260: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:39.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:39 vm00 bash[65531]: cluster 2026-03-09T18:50:38.136563+0000 mgr.y (mgr.44107) 486 : cluster [DBG] pgmap v261: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:39.523 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:39 vm00 bash[65531]: cluster 2026-03-09T18:50:38.136563+0000 mgr.y (mgr.44107) 486 : cluster [DBG] pgmap v261: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:39.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:39 vm00 bash[69512]: cluster 2026-03-09T18:50:38.136563+0000 mgr.y (mgr.44107) 486 : cluster [DBG] pgmap v261: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T18:50:39.524 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:39 vm00 bash[69512]: cluster 2026-03-09T18:50:38.136563+0000 mgr.y (mgr.44107) 486 : cluster [DBG] pgmap v261: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:39 vm08 bash[46122]: cluster 2026-03-09T18:50:38.136563+0000 mgr.y (mgr.44107) 486 : cluster [DBG] pgmap v261: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:39.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:39 vm08 bash[46122]: cluster 2026-03-09T18:50:38.136563+0000 mgr.y (mgr.44107) 486 : cluster [DBG] pgmap v261: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:39.878 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:39 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:50:39] "GET /metrics HTTP/1.1" 200 37990 "" "Prometheus/2.51.0" 2026-03-09T18:50:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:41 vm00 bash[65531]: cluster 2026-03-09T18:50:40.136905+0000 mgr.y (mgr.44107) 487 : cluster [DBG] pgmap v262: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:41.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:41 vm00 bash[65531]: cluster 2026-03-09T18:50:40.136905+0000 mgr.y (mgr.44107) 487 : cluster [DBG] pgmap v262: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:41.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:41 vm00 bash[69512]: cluster 2026-03-09T18:50:40.136905+0000 mgr.y (mgr.44107) 487 : cluster [DBG] pgmap v262: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:41.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:41 
vm00 bash[69512]: cluster 2026-03-09T18:50:40.136905+0000 mgr.y (mgr.44107) 487 : cluster [DBG] pgmap v262: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:41 vm08 bash[46122]: cluster 2026-03-09T18:50:40.136905+0000 mgr.y (mgr.44107) 487 : cluster [DBG] pgmap v262: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:41.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:41 vm08 bash[46122]: cluster 2026-03-09T18:50:40.136905+0000 mgr.y (mgr.44107) 487 : cluster [DBG] pgmap v262: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:43.543 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:43 vm00 bash[65531]: cluster 2026-03-09T18:50:42.137403+0000 mgr.y (mgr.44107) 488 : cluster [DBG] pgmap v263: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:43.543 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:43 vm00 bash[65531]: cluster 2026-03-09T18:50:42.137403+0000 mgr.y (mgr.44107) 488 : cluster [DBG] pgmap v263: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:43.543 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:43 vm00 bash[69512]: cluster 2026-03-09T18:50:42.137403+0000 mgr.y (mgr.44107) 488 : cluster [DBG] pgmap v263: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:43.543 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:43 vm00 bash[69512]: cluster 2026-03-09T18:50:42.137403+0000 mgr.y (mgr.44107) 488 : cluster [DBG] pgmap v263: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 
09 18:50:43 vm08 bash[46122]: cluster 2026-03-09T18:50:42.137403+0000 mgr.y (mgr.44107) 488 : cluster [DBG] pgmap v263: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:50:43.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:43 vm08 bash[46122]: cluster 2026-03-09T18:50:42.137403+0000 mgr.y (mgr.44107) 488 : cluster [DBG] pgmap v263: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:50:43.799 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.a vm00 *:9093,9094 running (21m) 75s ago 28m 14.6M - 0.25.0 c8568f914cd2 2a8a29ecee54
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:grafana.a vm08 *:3000 running (8m) 118s ago 27m 66.7M - 10.4.0 c8b91775d855 5e0e30d27ab2
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:iscsi.foo.vm00.ywhulq vm00 running (81s) 75s ago 27m 76.3M - 3.9 654f31e6858e 8493aed3ce1d
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:mgr.x vm08 *:8443,9283,8765 running (8m) 118s ago 30m 466M - 19.2.3-678-ge911bdeb 654f31e6858e e51f08afe84e
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:mgr.y vm00 *:8443,9283,8765 running (18m) 75s ago 31m 539M - 19.2.3-678-ge911bdeb 654f31e6858e 838266ced7b2
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (7m) 75s ago 31m 62.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e eb9fca83668a
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm08 running (7m) 118s ago 30m 51.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a343d2673f4
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm00 running (7m) 75s ago 30m 52.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5c50cbcbef6b
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.a vm00 *:9100 running (21m) 75s ago 28m 7984k - 1.7.0 72c9c2088986 c2e3e3202fde
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.b vm08 *:9100 running (20m) 118s ago 28m 8267k - 1.7.0 72c9c2088986 7a7d1ed8c801
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (4m) 75s ago 30m 53.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1334681baf1a
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (4m) 75s ago 30m 53.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b0cddb861a9d
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (5m) 75s ago 29m 51.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9a838e294e64
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (6m) 75s ago 29m 77.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 161fbb574888
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm08 running (3m) 118s ago 29m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7575a2bf51cd
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm08 running (3m) 118s ago 29m 71.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9263a2afad40
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm08 running (3m) 118s ago 28m 48.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b5db37a03fe5
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm08 running (2m) 118s ago 28m 69.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9904fad47d23
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.a vm08 *:9095 running (8m) 118s ago 28m 47.5M - 2.51.0 1d3b7f56885b 2dc1789f7bf0
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm00.ygjynr vm00 *:8000 running (2m) 75s ago 27m 91.4M - 19.2.3-678-ge911bdeb 654f31e6858e c812b26432aa
2026-03-09T18:50:44.219 INFO:teuthology.orchestra.run.vm00.stdout:rgw.foo.vm08.rcuedn vm08 *:8000 running (2m) 118s ago 27m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e a1f2a8ce96e5
2026-03-09T18:50:44.266 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "mon": {
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "mgr": {
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "osd": {
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:50:44.721 INFO:teuthology.orchestra.run.vm00.stdout: "rgw": {
2026-03-09T18:50:44.722 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T18:50:44.722 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-09T18:50:44.722 INFO:teuthology.orchestra.run.vm00.stdout: "overall": {
2026-03-09T18:50:44.722 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 15
2026-03-09T18:50:44.722 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-09T18:50:44.722 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:50:44.779 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "target_image": null,
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "in_progress": false,
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "which": "",
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "services_complete": [],
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "progress": null,
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "message": "",
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout: "is_paused": false
2026-03-09T18:50:45.197 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:50:45.251 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: audit 2026-03-09T18:50:43.547150+0000 mgr.y (mgr.44107)
489 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: audit 2026-03-09T18:50:43.547150+0000 mgr.y (mgr.44107) 489 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: cluster 2026-03-09T18:50:44.137709+0000 mgr.y (mgr.44107) 490 : cluster [DBG] pgmap v264: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: cluster 2026-03-09T18:50:44.137709+0000 mgr.y (mgr.44107) 490 : cluster [DBG] pgmap v264: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: audit 2026-03-09T18:50:44.218825+0000 mgr.y (mgr.44107) 491 : audit [DBG] from='client.54723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: audit 2026-03-09T18:50:44.218825+0000 mgr.y (mgr.44107) 491 : audit [DBG] from='client.54723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: audit 2026-03-09T18:50:44.724909+0000 mon.c (mon.1) 522 : audit [DBG] from='client.? 
192.168.123.100:0/2623671848' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:45 vm00 bash[65531]: audit 2026-03-09T18:50:44.724909+0000 mon.c (mon.1) 522 : audit [DBG] from='client.? 192.168.123.100:0/2623671848' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: audit 2026-03-09T18:50:43.547150+0000 mgr.y (mgr.44107) 489 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: audit 2026-03-09T18:50:43.547150+0000 mgr.y (mgr.44107) 489 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: cluster 2026-03-09T18:50:44.137709+0000 mgr.y (mgr.44107) 490 : cluster [DBG] pgmap v264: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: cluster 2026-03-09T18:50:44.137709+0000 mgr.y (mgr.44107) 490 : cluster [DBG] pgmap v264: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: audit 2026-03-09T18:50:44.218825+0000 mgr.y (mgr.44107) 491 : audit [DBG] from='client.54723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: audit 2026-03-09T18:50:44.218825+0000 mgr.y (mgr.44107) 491 : audit [DBG] from='client.54723 -' 
entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: audit 2026-03-09T18:50:44.724909+0000 mon.c (mon.1) 522 : audit [DBG] from='client.? 192.168.123.100:0/2623671848' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:45.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:45 vm00 bash[69512]: audit 2026-03-09T18:50:44.724909+0000 mon.c (mon.1) 522 : audit [DBG] from='client.? 192.168.123.100:0/2623671848' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:45.709 INFO:teuthology.orchestra.run.vm00.stdout:HEALTH_OK 2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: audit 2026-03-09T18:50:43.547150+0000 mgr.y (mgr.44107) 489 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: audit 2026-03-09T18:50:43.547150+0000 mgr.y (mgr.44107) 489 : audit [DBG] from='client.34556 -' entity='client.iscsi.foo.vm00.ywhulq' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: cluster 2026-03-09T18:50:44.137709+0000 mgr.y (mgr.44107) 490 : cluster [DBG] pgmap v264: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: cluster 2026-03-09T18:50:44.137709+0000 mgr.y (mgr.44107) 490 : cluster [DBG] pgmap v264: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: audit 
2026-03-09T18:50:44.218825+0000 mgr.y (mgr.44107) 491 : audit [DBG] from='client.54723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: audit 2026-03-09T18:50:44.218825+0000 mgr.y (mgr.44107) 491 : audit [DBG] from='client.54723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: audit 2026-03-09T18:50:44.724909+0000 mon.c (mon.1) 522 : audit [DBG] from='client.? 192.168.123.100:0/2623671848' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:50:45.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:45 vm08 bash[46122]: audit 2026-03-09T18:50:44.724909+0000 mon.c (mon.1) 522 : audit [DBG] from='client.? 192.168.123.100:0/2623671848' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:50:45.762 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"''
2026-03-09T18:50:46.241 INFO:teuthology.orchestra.run.vm00.stdout:true
2026-03-09T18:50:46.280 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1'
2026-03-09T18:50:46.512 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:46 vm00 bash[65531]: audit 2026-03-09T18:50:45.200510+0000 mgr.y (mgr.44107) 492 : audit [DBG] from='client.54735 -'
entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:46.512 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:46 vm00 bash[65531]: audit 2026-03-09T18:50:45.200510+0000 mgr.y (mgr.44107) 492 : audit [DBG] from='client.54735 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:46.512 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:46 vm00 bash[65531]: audit 2026-03-09T18:50:45.713192+0000 mon.a (mon.0) 725 : audit [DBG] from='client.? 192.168.123.100:0/22599016' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:50:46.512 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:46 vm00 bash[65531]: audit 2026-03-09T18:50:45.713192+0000 mon.a (mon.0) 725 : audit [DBG] from='client.? 192.168.123.100:0/22599016' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:50:46.512 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:46 vm00 bash[65531]: audit 2026-03-09T18:50:46.230543+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.100:0/2472824057' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:46.512 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:46 vm00 bash[65531]: audit 2026-03-09T18:50:46.230543+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 
192.168.123.100:0/2472824057' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:46.513 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:46 vm00 bash[69512]: audit 2026-03-09T18:50:45.200510+0000 mgr.y (mgr.44107) 492 : audit [DBG] from='client.54735 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:46.513 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:46 vm00 bash[69512]: audit 2026-03-09T18:50:45.200510+0000 mgr.y (mgr.44107) 492 : audit [DBG] from='client.54735 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:46.513 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:46 vm00 bash[69512]: audit 2026-03-09T18:50:45.713192+0000 mon.a (mon.0) 725 : audit [DBG] from='client.? 192.168.123.100:0/22599016' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:50:46.513 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:46 vm00 bash[69512]: audit 2026-03-09T18:50:45.713192+0000 mon.a (mon.0) 725 : audit [DBG] from='client.? 192.168.123.100:0/22599016' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:50:46.513 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:46 vm00 bash[69512]: audit 2026-03-09T18:50:46.230543+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.100:0/2472824057' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:46.513 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:46 vm00 bash[69512]: audit 2026-03-09T18:50:46.230543+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 
192.168.123.100:0/2472824057' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:46 vm08 bash[46122]: audit 2026-03-09T18:50:45.200510+0000 mgr.y (mgr.44107) 492 : audit [DBG] from='client.54735 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:46 vm08 bash[46122]: audit 2026-03-09T18:50:45.200510+0000 mgr.y (mgr.44107) 492 : audit [DBG] from='client.54735 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:46 vm08 bash[46122]: audit 2026-03-09T18:50:45.713192+0000 mon.a (mon.0) 725 : audit [DBG] from='client.? 192.168.123.100:0/22599016' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:50:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:46 vm08 bash[46122]: audit 2026-03-09T18:50:45.713192+0000 mon.a (mon.0) 725 : audit [DBG] from='client.? 192.168.123.100:0/22599016' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T18:50:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:46 vm08 bash[46122]: audit 2026-03-09T18:50:46.230543+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.100:0/2472824057' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:46.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:46 vm08 bash[46122]: audit 2026-03-09T18:50:46.230543+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 
192.168.123.100:0/2472824057' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T18:50:46.768 INFO:teuthology.orchestra.run.vm00.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-09T18:50:46.807 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls | grep '"'"'^osd '"'"''
2026-03-09T18:50:47.232 INFO:teuthology.orchestra.run.vm00.stdout:osd 8 2m ago -
2026-03-09T18:50:47.268 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-09T18:50:47.270 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm00.local
2026-03-09T18:50:47.270 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- bash -c 'ceph orch upgrade ls'
2026-03-09T18:50:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:47 vm00 bash[65531]: cluster 2026-03-09T18:50:46.138139+0000 mgr.y (mgr.44107) 493 : cluster [DBG] pgmap v265: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:50:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:47 vm00 bash[65531]: cluster 2026-03-09T18:50:46.138139+0000 mgr.y (mgr.44107) 493 : cluster [DBG] pgmap v265: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T18:50:47.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:47 vm00 bash[65531]: audit 2026-03-09T18:50:46.758620+0000 mon.a (mon.0) 726 : audit [DBG] from='client.?
192.168.123.100:0/1795495868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:47.629 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:47 vm00 bash[65531]: audit 2026-03-09T18:50:46.758620+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.100:0/1795495868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:47 vm00 bash[69512]: cluster 2026-03-09T18:50:46.138139+0000 mgr.y (mgr.44107) 493 : cluster [DBG] pgmap v265: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:47 vm00 bash[69512]: cluster 2026-03-09T18:50:46.138139+0000 mgr.y (mgr.44107) 493 : cluster [DBG] pgmap v265: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:47 vm00 bash[69512]: audit 2026-03-09T18:50:46.758620+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.100:0/1795495868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:47.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:47 vm00 bash[69512]: audit 2026-03-09T18:50:46.758620+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 
192.168.123.100:0/1795495868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:47 vm08 bash[46122]: cluster 2026-03-09T18:50:46.138139+0000 mgr.y (mgr.44107) 493 : cluster [DBG] pgmap v265: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:47 vm08 bash[46122]: cluster 2026-03-09T18:50:46.138139+0000 mgr.y (mgr.44107) 493 : cluster [DBG] pgmap v265: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:47 vm08 bash[46122]: audit 2026-03-09T18:50:46.758620+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.100:0/1795495868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:47.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:47 vm08 bash[46122]: audit 2026-03-09T18:50:46.758620+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 
192.168.123.100:0/1795495868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T18:50:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:48 vm00 bash[65531]: audit 2026-03-09T18:50:47.223288+0000 mgr.y (mgr.44107) 494 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:48 vm00 bash[65531]: audit 2026-03-09T18:50:47.223288+0000 mgr.y (mgr.44107) 494 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:48 vm00 bash[65531]: audit 2026-03-09T18:50:48.104399+0000 mon.c (mon.1) 523 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:48.628 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:48 vm00 bash[65531]: audit 2026-03-09T18:50:48.104399+0000 mon.c (mon.1) 523 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T18:50:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:48 vm00 bash[69512]: audit 2026-03-09T18:50:47.223288+0000 mgr.y (mgr.44107) 494 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:48 vm00 bash[69512]: audit 2026-03-09T18:50:47.223288+0000 mgr.y (mgr.44107) 494 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:48 vm00 bash[69512]: audit 2026-03-09T18:50:48.104399+0000 mon.c (mon.1) 523 : audit [DBG] from='mgr.44107 
192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:50:48.629 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:48 vm00 bash[69512]: audit 2026-03-09T18:50:48.104399+0000 mon.c (mon.1) 523 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:50:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:48 vm08 bash[46122]: audit 2026-03-09T18:50:47.223288+0000 mgr.y (mgr.44107) 494 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:50:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:48 vm08 bash[46122]: audit 2026-03-09T18:50:47.223288+0000 mgr.y (mgr.44107) 494 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:50:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:48 vm08 bash[46122]: audit 2026-03-09T18:50:48.104399+0000 mon.c (mon.1) 523 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:50:48.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:48 vm08 bash[46122]: audit 2026-03-09T18:50:48.104399+0000 mon.c (mon.1) 523 : audit [DBG] from='mgr.44107 192.168.123.100:0/3740793199' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "image": "quay.io/ceph/ceph",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "registry": "quay.io",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "bare_image": "ceph/ceph",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "versions": [
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "20.2.0",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "20.1.1",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "20.1.0",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "19.2.3",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "19.2.2",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "19.2.1",
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: "19.2.0"
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout: ]
2026-03-09T18:50:49.096 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-09T18:50:49.146 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0'
2026-03-09T18:50:49.362 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:49 vm00 bash[65531]: audit 2026-03-09T18:50:47.686899+0000 mgr.y (mgr.44107) 495 : audit [DBG] from='client.34628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:50:49.362 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:49 vm00 bash[65531]: audit 2026-03-09T18:50:47.686899+0000 mgr.y (mgr.44107) 495 : audit [DBG] from='client.34628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:50:49.362 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:49 vm00 bash[65531]: cluster 2026-03-09T18:50:48.138572+0000 mgr.y (mgr.44107) 496 : cluster [DBG] pgmap v266: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T18:50:49.363 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:49 vm00 bash[65531]: cluster
2026-03-09T18:50:48.138572+0000 mgr.y (mgr.44107) 496 : cluster [DBG] pgmap v266: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:49.363 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:49 vm00 bash[69512]: audit 2026-03-09T18:50:47.686899+0000 mgr.y (mgr.44107) 495 : audit [DBG] from='client.34628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:49.363 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:49 vm00 bash[69512]: audit 2026-03-09T18:50:47.686899+0000 mgr.y (mgr.44107) 495 : audit [DBG] from='client.34628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:49.363 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:49 vm00 bash[69512]: cluster 2026-03-09T18:50:48.138572+0000 mgr.y (mgr.44107) 496 : cluster [DBG] pgmap v266: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:49.363 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:49 vm00 bash[69512]: cluster 2026-03-09T18:50:48.138572+0000 mgr.y (mgr.44107) 496 : cluster [DBG] pgmap v266: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:49.628 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:49 vm00 bash[53976]: ::ffff:192.168.123.108 - - [09/Mar/2026:18:50:49] "GET /metrics HTTP/1.1" 200 37988 "" "Prometheus/2.51.0" 2026-03-09T18:50:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:49 vm08 bash[46122]: audit 2026-03-09T18:50:47.686899+0000 mgr.y (mgr.44107) 495 : audit [DBG] from='client.34628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:49 vm08 bash[46122]: audit 2026-03-09T18:50:47.686899+0000 mgr.y (mgr.44107) 495 : audit [DBG] 
from='client.34628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:49 vm08 bash[46122]: cluster 2026-03-09T18:50:48.138572+0000 mgr.y (mgr.44107) 496 : cluster [DBG] pgmap v266: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:49.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:49 vm08 bash[46122]: cluster 2026-03-09T18:50:48.138572+0000 mgr.y (mgr.44107) 496 : cluster [DBG] pgmap v266: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:51.201 INFO:teuthology.orchestra.run.vm00.stdout: "16.2.0", 2026-03-09T18:50:51.243 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2' 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:51 vm00 bash[65531]: audit 2026-03-09T18:50:49.581101+0000 mgr.y (mgr.44107) 497 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:51 vm00 bash[65531]: audit 2026-03-09T18:50:49.581101+0000 mgr.y (mgr.44107) 497 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:51 vm00 bash[65531]: cluster 2026-03-09T18:50:50.138943+0000 mgr.y (mgr.44107) 498 : cluster [DBG] pgmap v267: 161 pgs: 161 active+clean; 457 
KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:51 vm00 bash[65531]: cluster 2026-03-09T18:50:50.138943+0000 mgr.y (mgr.44107) 498 : cluster [DBG] pgmap v267: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:51 vm00 bash[69512]: audit 2026-03-09T18:50:49.581101+0000 mgr.y (mgr.44107) 497 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:51 vm00 bash[69512]: audit 2026-03-09T18:50:49.581101+0000 mgr.y (mgr.44107) 497 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:51 vm00 bash[69512]: cluster 2026-03-09T18:50:50.138943+0000 mgr.y (mgr.44107) 498 : cluster [DBG] pgmap v267: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:51.476 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:51 vm00 bash[69512]: cluster 2026-03-09T18:50:50.138943+0000 mgr.y (mgr.44107) 498 : cluster [DBG] pgmap v267: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:51 vm08 bash[46122]: audit 2026-03-09T18:50:49.581101+0000 mgr.y (mgr.44107) 497 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:51.724 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:51 vm08 bash[46122]: audit 2026-03-09T18:50:49.581101+0000 mgr.y (mgr.44107) 497 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:51 vm08 bash[46122]: cluster 2026-03-09T18:50:50.138943+0000 mgr.y (mgr.44107) 498 : cluster [DBG] pgmap v267: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:51.724 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:51 vm08 bash[46122]: cluster 2026-03-09T18:50:50.138943+0000 mgr.y (mgr.44107) 498 : cluster [DBG] pgmap v267: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T18:50:53.075 INFO:teuthology.orchestra.run.vm00.stdout: "v16.2.2", 2026-03-09T18:50:53.075 INFO:teuthology.orchestra.run.vm00.stdout: "v16.2.2-20210505", 2026-03-09T18:50:53.134 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T18:50:53.136 INFO:tasks.cephadm:Teardown begin 2026-03-09T18:50:53.136 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:50:53.143 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:50:53.156 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T18:50:53.156 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 -- ceph mgr module disable cephadm 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 bash[65531]: audit 2026-03-09T18:50:51.679045+0000 mgr.y (mgr.44107) 499 : audit [DBG] from='client.54777 -' 
entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 bash[65531]: audit 2026-03-09T18:50:51.679045+0000 mgr.y (mgr.44107) 499 : audit [DBG] from='client.54777 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 bash[65531]: cluster 2026-03-09T18:50:52.139349+0000 mgr.y (mgr.44107) 500 : cluster [DBG] pgmap v268: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 bash[65531]: cluster 2026-03-09T18:50:52.139349+0000 mgr.y (mgr.44107) 500 : cluster [DBG] pgmap v268: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[69512]: audit 2026-03-09T18:50:51.679045+0000 mgr.y (mgr.44107) 499 : audit [DBG] from='client.54777 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[69512]: audit 2026-03-09T18:50:51.679045+0000 mgr.y (mgr.44107) 499 : audit [DBG] from='client.54777 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[69512]: cluster 2026-03-09T18:50:52.139349+0000 mgr.y (mgr.44107) 500 : cluster [DBG] pgmap v268: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T18:50:53.340 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[69512]: cluster 2026-03-09T18:50:52.139349+0000 mgr.y (mgr.44107) 500 : cluster [DBG] pgmap v268: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:53 vm08 bash[46122]: audit 2026-03-09T18:50:51.679045+0000 mgr.y (mgr.44107) 499 : audit [DBG] from='client.54777 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:53 vm08 bash[46122]: audit 2026-03-09T18:50:51.679045+0000 mgr.y (mgr.44107) 499 : audit [DBG] from='client.54777 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:50:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:53 vm08 bash[46122]: cluster 2026-03-09T18:50:52.139349+0000 mgr.y (mgr.44107) 500 : cluster [DBG] pgmap v268: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:53.474 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:53 vm08 bash[46122]: cluster 2026-03-09T18:50:52.139349+0000 mgr.y (mgr.44107) 500 : cluster [DBG] pgmap v268: 161 pgs: 161 active+clean; 457 KiB data, 289 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T18:50:53.477 INFO:teuthology.orchestra.run.vm00.stderr:Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',) 2026-03-09T18:50:53.519 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:50:53.520 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 
2026-03-09T18:50:53.520 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T18:50:53.523 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T18:50:53.525 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T18:50:53.525 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T18:50:53.525 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a 2026-03-09T18:50:53.609 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 systemd[1]: Stopping Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:50:53.730 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service' 2026-03-09T18:50:53.778 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[69512]: debug 2026-03-09T18:50:53.608+0000 7fc8d8018640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:50:53.778 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[69512]: debug 2026-03-09T18:50:53.608+0000 7fc8d8018640 -1 mon.a@0(leader) e4 *** Got Signal Terminated *** 2026-03-09T18:50:53.778 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 bash[112301]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon-a 2026-03-09T18:50:53.778 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.a.service: Deactivated successfully. 2026-03-09T18:50:53.778 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 09 18:50:53 vm00 systemd[1]: Stopped Ceph mon.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:50:53.785 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:50:53.785 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T18:50:53.785 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T18:50:53.785 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.c 2026-03-09T18:50:53.878 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 systemd[1]: Stopping Ceph mon.c for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:50:54.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 bash[65531]: debug 2026-03-09T18:50:53.880+0000 7fdb856bb640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:50:54.041 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 09 18:50:53 vm00 bash[65531]: debug 2026-03-09T18:50:53.880+0000 7fdb856bb640 -1 mon.c@1(peon) e4 *** Got Signal Terminated *** 2026-03-09T18:50:54.043 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:53 vm00 bash[53976]: [09/Mar/2026:18:50:53] ENGINE Bus STOPPING 2026-03-09T18:50:54.106 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.c.service' 2026-03-09T18:50:54.117 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:50:54.117 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-09T18:50:54.117 INFO:tasks.cephadm.mon.b:Stopping mon.b... 
2026-03-09T18:50:54.117 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.b 2026-03-09T18:50:54.150 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:54 vm00 bash[53976]: [09/Mar/2026:18:50:54] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T18:50:54.151 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:54 vm00 bash[53976]: [09/Mar/2026:18:50:54] ENGINE Bus STOPPED 2026-03-09T18:50:54.151 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:54 vm00 bash[53976]: [09/Mar/2026:18:50:54] ENGINE Bus STARTING 2026-03-09T18:50:54.151 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:54 vm00 bash[53976]: [09/Mar/2026:18:50:54] ENGINE Serving on http://:::9283 2026-03-09T18:50:54.151 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 09 18:50:54 vm00 bash[53976]: [09/Mar/2026:18:50:54] ENGINE Bus STARTED 2026-03-09T18:50:54.374 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:54 vm08 systemd[1]: Stopping Ceph mon.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:50:54.374 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:54 vm08 bash[46122]: debug 2026-03-09T18:50:54.175+0000 7fedb2505640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:50:54.374 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:54 vm08 bash[46122]: debug 2026-03-09T18:50:54.175+0000 7fedb2505640 -1 mon.b@2(peon) e4 *** Got Signal Terminated *** 2026-03-09T18:50:54.377 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 09 18:50:54 vm08 bash[76611]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-mon-b 2026-03-09T18:50:54.408 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mon.b.service' 2026-03-09T18:50:54.420 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:50:54.420 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T18:50:54.420 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-09T18:50:54.420 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y 2026-03-09T18:50:54.586 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.y.service' 2026-03-09T18:50:54.597 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:50:54.597 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-09T18:50:54.597 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-09T18:50:54.597 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x 2026-03-09T18:50:54.663 INFO:journalctl@ceph.mgr.x.vm08.stdout:Mar 09 18:50:54 vm08 systemd[1]: Stopping Ceph mgr.x for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:50:54.726 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@mgr.x.service' 2026-03-09T18:50:54.736 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:50:54.737 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-09T18:50:54.737 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T18:50:54.737 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.0 2026-03-09T18:50:55.129 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:50:54 vm00 systemd[1]: Stopping Ceph osd.0 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:50:55.129 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:50:54 vm00 bash[87898]: debug 2026-03-09T18:50:54.780+0000 7f80a0fd0640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:50:55.129 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:50:54 vm00 bash[87898]: debug 2026-03-09T18:50:54.780+0000 7f80a0fd0640 -1 osd.0 154 *** Got signal Terminated *** 2026-03-09T18:50:55.129 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:50:54 vm00 bash[87898]: debug 2026-03-09T18:50:54.780+0000 7f80a0fd0640 -1 osd.0 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:00.090 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 09 18:50:59 vm00 bash[112597]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-0 2026-03-09T18:51:00.136 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.0.service' 2026-03-09T18:51:00.148 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:00.148 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T18:51:00.148 INFO:tasks.cephadm.osd.1:Stopping osd.1... 
2026-03-09T18:51:00.148 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.1 2026-03-09T18:51:00.378 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:51:00 vm00 systemd[1]: Stopping Ceph osd.1 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:51:00.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:51:00 vm00 bash[94183]: debug 2026-03-09T18:51:00.240+0000 7f8c2b406640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:00.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:51:00 vm00 bash[94183]: debug 2026-03-09T18:51:00.240+0000 7f8c2b406640 -1 osd.1 154 *** Got signal Terminated *** 2026-03-09T18:51:00.379 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:51:00 vm00 bash[94183]: debug 2026-03-09T18:51:00.240+0000 7f8c2b406640 -1 osd.1 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:05.566 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 09 18:51:05 vm00 bash[112781]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-1 2026-03-09T18:51:05.601 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.1.service' 2026-03-09T18:51:05.611 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:05.611 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T18:51:05.611 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T18:51:05.612 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.2 2026-03-09T18:51:05.878 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:51:05 vm00 systemd[1]: Stopping Ceph osd.2 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:51:05.878 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:51:05 vm00 bash[81642]: debug 2026-03-09T18:51:05.696+0000 7ff2dfb31640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:05.878 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:51:05 vm00 bash[81642]: debug 2026-03-09T18:51:05.696+0000 7ff2dfb31640 -1 osd.2 154 *** Got signal Terminated *** 2026-03-09T18:51:05.878 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:51:05 vm00 bash[81642]: debug 2026-03-09T18:51:05.696+0000 7ff2dfb31640 -1 osd.2 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:11.011 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 09 18:51:10 vm00 bash[112965]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-2 2026-03-09T18:51:11.037 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.2.service' 2026-03-09T18:51:11.047 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:11.047 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T18:51:11.047 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T18:51:11.047 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.3 2026-03-09T18:51:11.379 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:51:11 vm00 systemd[1]: Stopping Ceph osd.3 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:51:11.379 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:51:11 vm00 bash[76849]: debug 2026-03-09T18:51:11.132+0000 7f808789d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:11.379 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:51:11 vm00 bash[76849]: debug 2026-03-09T18:51:11.132+0000 7f808789d640 -1 osd.3 154 *** Got signal Terminated *** 2026-03-09T18:51:11.379 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:51:11 vm00 bash[76849]: debug 2026-03-09T18:51:11.132+0000 7f808789d640 -1 osd.3 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:16.462 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 09 18:51:16 vm00 bash[113155]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-3 2026-03-09T18:51:16.513 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.3.service' 2026-03-09T18:51:16.522 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:16.523 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T18:51:16.523 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-09T18:51:16.523 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.4 2026-03-09T18:51:16.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:51:16 vm08 systemd[1]: Stopping Ceph osd.4 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:51:16.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:51:16 vm08 bash[54020]: debug 2026-03-09T18:51:16.571+0000 7f62b3b1d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:16.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:51:16 vm08 bash[54020]: debug 2026-03-09T18:51:16.571+0000 7f62b3b1d640 -1 osd.4 154 *** Got signal Terminated *** 2026-03-09T18:51:16.974 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:51:16 vm08 bash[54020]: debug 2026-03-09T18:51:16.571+0000 7f62b3b1d640 -1 osd.4 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:21.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:21 vm08 bash[68327]: debug 2026-03-09T18:51:21.111+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:21.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:21 vm08 bash[63503]: debug 2026-03-09T18:51:21.103+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:21.946 INFO:journalctl@ceph.osd.4.vm08.stdout:Mar 09 18:51:21 vm08 bash[76789]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-4 2026-03-09T18:51:21.993 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.4.service' 2026-03-09T18:51:22.003 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:22.004 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T18:51:22.004 INFO:tasks.cephadm.osd.5:Stopping osd.5... 
2026-03-09T18:51:22.004 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.5 2026-03-09T18:51:22.224 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:22 vm08 bash[63503]: debug 2026-03-09T18:51:22.087+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:22.224 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:22 vm08 bash[68327]: debug 2026-03-09T18:51:22.115+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:22.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:22 vm08 systemd[1]: Stopping Ceph osd.5 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:51:22.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:22 vm08 bash[58822]: debug 2026-03-09T18:51:22.091+0000 7f906053c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:22.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:22 vm08 bash[58822]: debug 2026-03-09T18:51:22.091+0000 7f906053c640 -1 osd.5 154 *** Got signal Terminated *** 2026-03-09T18:51:22.224 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:22 vm08 bash[58822]: debug 2026-03-09T18:51:22.091+0000 7f906053c640 -1 osd.5 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:23.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:23 vm08 bash[63503]: debug 2026-03-09T18:51:23.099+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 
2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:23.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:23 vm08 bash[68327]: debug 2026-03-09T18:51:23.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:24.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:24 vm08 bash[63503]: debug 2026-03-09T18:51:24.099+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:24.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:24 vm08 bash[68327]: debug 2026-03-09T18:51:24.131+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:25.474 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:25 vm08 bash[58822]: debug 2026-03-09T18:51:25.223+0000 7f905c354640 -1 osd.5 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:59.623088+0000 front 2026-03-09T18:50:59.623291+0000 (oldest deadline 2026-03-09T18:51:24.322618+0000) 2026-03-09T18:51:25.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:25 vm08 bash[63503]: debug 2026-03-09T18:51:25.071+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:25.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:25 vm08 bash[68327]: debug 2026-03-09T18:51:25.135+0000 7f8fdbc9f640 
-1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:26.474 INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:26 vm08 bash[58822]: debug 2026-03-09T18:51:26.203+0000 7f905c354640 -1 osd.5 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:59.623088+0000 front 2026-03-09T18:50:59.623291+0000 (oldest deadline 2026-03-09T18:51:24.322618+0000) 2026-03-09T18:51:26.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:26 vm08 bash[63503]: debug 2026-03-09T18:51:26.059+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:26.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:26 vm08 bash[68327]: debug 2026-03-09T18:51:26.171+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:27.278 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:27 vm08 bash[68327]: debug 2026-03-09T18:51:27.199+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:27.278 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:27 vm08 bash[63503]: debug 2026-03-09T18:51:27.019+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:27.278 
INFO:journalctl@ceph.osd.5.vm08.stdout:Mar 09 18:51:27 vm08 bash[76970]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-5 2026-03-09T18:51:27.459 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.5.service' 2026-03-09T18:51:27.471 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:27.471 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T18:51:27.471 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-09T18:51:27.471 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.6 2026-03-09T18:51:27.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:27 vm08 systemd[1]: Stopping Ceph osd.6 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:51:27.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:27 vm08 bash[63503]: debug 2026-03-09T18:51:27.547+0000 7f561e52c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:27.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:27 vm08 bash[63503]: debug 2026-03-09T18:51:27.547+0000 7f561e52c640 -1 osd.6 154 *** Got signal Terminated *** 2026-03-09T18:51:27.724 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:27 vm08 bash[63503]: debug 2026-03-09T18:51:27.547+0000 7f561e52c640 -1 osd.6 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:28.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:28 vm08 bash[68327]: debug 2026-03-09T18:51:28.155+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:28.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:28 vm08 bash[68327]: 
debug 2026-03-09T18:51:28.155+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:28.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:28 vm08 bash[63503]: debug 2026-03-09T18:51:28.055+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:29.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:29 vm08 bash[68327]: debug 2026-03-09T18:51:29.203+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:29.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:29 vm08 bash[68327]: debug 2026-03-09T18:51:29.203+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:29.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:29 vm08 bash[63503]: debug 2026-03-09T18:51:29.095+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:29.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:29 vm08 bash[63503]: debug 2026-03-09T18:51:29.095+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:04.837626+0000 front 2026-03-09T18:51:04.837964+0000 (oldest deadline 2026-03-09T18:51:28.937454+0000) 
2026-03-09T18:51:30.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:30 vm08 bash[68327]: debug 2026-03-09T18:51:30.219+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:30.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:30 vm08 bash[68327]: debug 2026-03-09T18:51:30.219+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:30.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:30 vm08 bash[63503]: debug 2026-03-09T18:51:30.075+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:30.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:30 vm08 bash[63503]: debug 2026-03-09T18:51:30.075+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:04.837626+0000 front 2026-03-09T18:51:04.837964+0000 (oldest deadline 2026-03-09T18:51:28.937454+0000) 2026-03-09T18:51:31.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:31 vm08 bash[68327]: debug 2026-03-09T18:51:31.191+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:31.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:31 vm08 bash[68327]: debug 2026-03-09T18:51:31.191+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 
2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:31.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:31 vm08 bash[68327]: debug 2026-03-09T18:51:31.191+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 2026-03-09T18:51:31.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:31 vm08 bash[63503]: debug 2026-03-09T18:51:31.031+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:31.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:31 vm08 bash[63503]: debug 2026-03-09T18:51:31.031+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:04.837626+0000 front 2026-03-09T18:51:04.837964+0000 (oldest deadline 2026-03-09T18:51:28.937454+0000) 2026-03-09T18:51:32.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 bash[68327]: debug 2026-03-09T18:51:32.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:32.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 bash[68327]: debug 2026-03-09T18:51:32.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:32.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 bash[68327]: debug 2026-03-09T18:51:32.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no 
reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 2026-03-09T18:51:32.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:32 vm08 bash[63503]: debug 2026-03-09T18:51:32.051+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:58.536799+0000 front 2026-03-09T18:50:58.537047+0000 (oldest deadline 2026-03-09T18:51:20.236674+0000) 2026-03-09T18:51:32.474 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:32 vm08 bash[63503]: debug 2026-03-09T18:51:32.051+0000 7f561ab45640 -1 osd.6 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:04.837626+0000 front 2026-03-09T18:51:04.837964+0000 (oldest deadline 2026-03-09T18:51:28.937454+0000) 2026-03-09T18:51:32.852 INFO:journalctl@ceph.osd.6.vm08.stdout:Mar 09 18:51:32 vm08 bash[77150]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-6 2026-03-09T18:51:32.897 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.6.service' 2026-03-09T18:51:32.908 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:32.908 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T18:51:32.908 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-09T18:51:32.908 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.7 2026-03-09T18:51:33.170 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 systemd[1]: Stopping Ceph osd.7 for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:51:33.170 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 bash[68327]: debug 2026-03-09T18:51:32.995+0000 7f8fdfe87640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:51:33.170 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 bash[68327]: debug 2026-03-09T18:51:32.995+0000 7f8fdfe87640 -1 osd.7 154 *** Got signal Terminated *** 2026-03-09T18:51:33.170 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:32 vm08 bash[68327]: debug 2026-03-09T18:51:32.995+0000 7f8fdfe87640 -1 osd.7 154 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:51:33.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:33 vm08 bash[68327]: debug 2026-03-09T18:51:33.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:33.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:33 vm08 bash[68327]: debug 2026-03-09T18:51:33.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:33.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:33 vm08 bash[68327]: debug 2026-03-09T18:51:33.167+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 2026-03-09T18:51:34.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:34 vm08 bash[68327]: debug 2026-03-09T18:51:34.135+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 
192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:34.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:34 vm08 bash[68327]: debug 2026-03-09T18:51:34.135+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:34.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:34 vm08 bash[68327]: debug 2026-03-09T18:51:34.135+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 2026-03-09T18:51:35.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:35 vm08 bash[68327]: debug 2026-03-09T18:51:35.107+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:35.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:35 vm08 bash[68327]: debug 2026-03-09T18:51:35.107+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:35.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:35 vm08 bash[68327]: debug 2026-03-09T18:51:35.107+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 2026-03-09T18:51:36.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:36 vm08 bash[68327]: debug 
2026-03-09T18:51:36.119+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:36.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:36 vm08 bash[68327]: debug 2026-03-09T18:51:36.119+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:36.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:36 vm08 bash[68327]: debug 2026-03-09T18:51:36.119+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 2026-03-09T18:51:37.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:37 vm08 bash[68327]: debug 2026-03-09T18:51:37.099+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6806 osd.0 since back 2026-03-09T18:50:56.446881+0000 front 2026-03-09T18:50:56.446987+0000 (oldest deadline 2026-03-09T18:51:20.546565+0000) 2026-03-09T18:51:37.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:37 vm08 bash[68327]: debug 2026-03-09T18:51:37.099+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6814 osd.1 since back 2026-03-09T18:51:05.147651+0000 front 2026-03-09T18:51:05.147544+0000 (oldest deadline 2026-03-09T18:51:28.047296+0000) 2026-03-09T18:51:37.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:37 vm08 bash[68327]: debug 2026-03-09T18:51:37.099+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6822 osd.2 since back 2026-03-09T18:51:08.047763+0000 front 2026-03-09T18:51:08.047722+0000 (oldest deadline 2026-03-09T18:51:30.947554+0000) 
2026-03-09T18:51:37.474 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:37 vm08 bash[68327]: debug 2026-03-09T18:51:37.099+0000 7f8fdbc9f640 -1 osd.7 154 heartbeat_check: no reply from 192.168.123.100:6830 osd.3 since back 2026-03-09T18:51:10.948156+0000 front 2026-03-09T18:51:10.947907+0000 (oldest deadline 2026-03-09T18:51:36.247906+0000) 2026-03-09T18:51:38.306 INFO:journalctl@ceph.osd.7.vm08.stdout:Mar 09 18:51:38 vm08 bash[77339]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-osd-7 2026-03-09T18:51:38.356 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@osd.7.service' 2026-03-09T18:51:38.366 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:38.366 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T18:51:38.366 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-09T18:51:38.366 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a 2026-03-09T18:51:38.513 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@prometheus.a.service' 2026-03-09T18:51:38.523 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:51:38.523 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T18:51:38.523 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force --keep-logs 2026-03-09T18:51:41.418 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:51:41.418 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:51:41.684 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:51:41.684 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:51:41.684 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: Stopping Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:51:41.684 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 bash[50953]: ts=2026-03-09T18:51:41.526Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 
2026-03-09T18:51:41.684 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 bash[113426]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-alertmanager-a 2026-03-09T18:51:41.684 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@alertmanager.a.service: Deactivated successfully. 2026-03-09T18:51:41.684 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: Stopped Ceph alertmanager.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:51:41.963 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:51:41.963 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: Stopping Ceph node-exporter.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:51:41.963 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 bash[113543]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-node-exporter-a 2026-03-09T18:51:41.963 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:51:41.963 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T18:51:41.963 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: Stopped Ceph node-exporter.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:51:41.963 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 09 18:51:41 vm00 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:06.587 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:03.678 INFO:teuthology.orchestra.run.vm00.stderr:Traceback (most recent call last): 2026-03-09T18:52:03.678 INFO:teuthology.orchestra.run.vm00.stderr: File "/home/ubuntu/cephtest/cephadm", line 8634, in <module> 2026-03-09T18:52:03.679 INFO:teuthology.orchestra.run.vm00.stderr: main() 2026-03-09T18:52:03.679 INFO:teuthology.orchestra.run.vm00.stderr: File "/home/ubuntu/cephtest/cephadm", line 8622, in main 2026-03-09T18:52:03.679 INFO:teuthology.orchestra.run.vm00.stderr: r = ctx.func(ctx) 2026-03-09T18:52:03.679 INFO:teuthology.orchestra.run.vm00.stderr: File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster 2026-03-09T18:52:03.680 INFO:teuthology.orchestra.run.vm00.stderr: with open(files[0]) as f: 2026-03-09T18:52:03.680 INFO:teuthology.orchestra.run.vm00.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf' 2026-03-09T18:52:03.693 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:52:03.693 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 
614f4990-1be4-11f1-8b84-dfd1edd9d965 --force --keep-logs 2026-03-09T18:52:06.587 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:06.587 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:06.873 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:06.873 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:52:06.873 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:06.873 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:06 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:07.195 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:07 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:07.195 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:07 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:52:07.474 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:07 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:07.474 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:07 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:17.527 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[44768]: logger=cleanup t=2026-03-09T18:52:17.272450702Z level=info msg="Completed cleanup jobs" duration=1.537098ms 2026-03-09T18:52:17.527 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[44768]: logger=plugins.update.checker t=2026-03-09T18:52:17.440502919Z level=info msg="Update check succeeded" duration=53.795212ms 2026-03-09T18:52:17.834 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: Stopping Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[44768]: logger=server t=2026-03-09T18:52:17.643583056Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[44768]: logger=grafana-apiserver t=2026-03-09T18:52:17.643727496Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[44768]: logger=tracing t=2026-03-09T18:52:17.643747774Z level=info msg="Closing tracing" 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[44768]: logger=ticker t=2026-03-09T18:52:17.643896532Z level=info msg=stopped last_tick=2026-03-09T18:52:10Z 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 bash[77923]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-grafana-a 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@grafana.a.service: Deactivated successfully. 2026-03-09T18:52:17.834 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: Stopped Ceph grafana.a for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 
2026-03-09T18:52:18.095 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:18.095 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:18.096 INFO:journalctl@ceph.grafana.a.vm08.stdout:Mar 09 18:52:17 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:18.378 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 systemd[1]: Stopping Ceph node-exporter.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965... 
2026-03-09T18:52:18.378 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 bash[78074]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965-node-exporter-b 2026-03-09T18:52:18.378 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T18:52:18.379 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 systemd[1]: ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T18:52:18.379 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 systemd[1]: Stopped Ceph node-exporter.b for 614f4990-1be4-11f1-8b84-dfd1edd9d965. 2026-03-09T18:52:18.379 INFO:journalctl@ceph.node-exporter.b.vm08.stdout:Mar 09 18:52:18 vm08 systemd[1]: /etc/systemd/system/ceph-614f4990-1be4-11f1-8b84-dfd1edd9d965@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:52:18.814 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:52:18.821 INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.conf': Is a directory 2026-03-09T18:52:18.821 INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T18:52:18.821 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:52:18.821 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:52:18.828 INFO:tasks.cephadm:Archiving crash dumps... 
2026-03-09T18:52:18.828 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602/remote/vm00/crash 2026-03-09T18:52:18.828 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/crash -- . 2026-03-09T18:52:18.872 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/crash: Cannot open: No such file or directory 2026-03-09T18:52:18.872 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:52:18.872 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602/remote/vm08/crash 2026-03-09T18:52:18.873 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/crash -- . 2026-03-09T18:52:18.880 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/crash: Cannot open: No such file or directory 2026-03-09T18:52:18.880 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:52:18.880 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T18:52:18.880 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | egrep -v CEPHADM_AGENT_DOWN | head -n 1 2026-03-09T18:52:18.926 INFO:tasks.cephadm:Compressing logs... 
2026-03-09T18:52:18.927 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:52:18.969 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:52:18.976 INFO:teuthology.orchestra.run.vm00.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:52:18.977 INFO:teuthology.orchestra.run.vm08.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:52:18.977 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:52:18.978 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:52:18.978 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mgr.x.log 2026-03-09T18:52:18.978 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.3.log 2026-03-09T18:52:18.978 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log 2026-03-09T18:52:18.978 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log 2026-03-09T18:52:18.982 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.c.log 2026-03-09T18:52:18.986 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log: 93.8% -- replaced with 
/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log.gz 2026-03-09T18:52:18.987 INFO:teuthology.orchestra.run.vm00.stderr: 91.2% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:52:18.987 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.1.log 2026-03-09T18:52:18.987 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-client.rgw.foo.vm00.ygjynr.log 2026-03-09T18:52:18.987 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mgr.x.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-client.rgw.foo.vm08.rcuedn.log 2026-03-09T18:52:18.987 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.c.log: /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mgr.y.log 2026-03-09T18:52:18.989 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log: 88.8% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.log.gz 2026-03-09T18:52:18.990 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.b.log 2026-03-09T18:52:18.991 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-client.rgw.foo.vm08.rcuedn.log: 75.3% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-client.rgw.foo.vm08.rcuedn.log.gz 2026-03-09T18:52:18.997 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.5.log 2026-03-09T18:52:18.997 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-client.rgw.foo.vm00.ygjynr.log: 75.6% -- replaced with 
/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-client.rgw.foo.vm00.ygjynr.log.gz 2026-03-09T18:52:18.997 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.a.log 2026-03-09T18:52:19.008 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.7.log 2026-03-09T18:52:19.008 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.5.log: 90.6% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:52:19.009 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.2.log 2026-03-09T18:52:19.021 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.audit.log 2026-03-09T18:52:19.023 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.6.log 2026-03-09T18:52:19.031 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.audit.log 2026-03-09T18:52:19.033 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-volume.log 2026-03-09T18:52:19.039 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-volume.log 2026-03-09T18:52:19.041 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.cephadm.log 2026-03-09T18:52:19.047 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.cephadm.log 2026-03-09T18:52:19.055 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.4.log 2026-03-09T18:52:19.057 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-volume.log: 94.4%gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/tcmu-runner.log 2026-03-09T18:52:19.057 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.cephadm.log: 91.2% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.audit.log.gz 2026-03-09T18:52:19.057 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.cephadm.log: -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.audit.log.gz 2026-03-09T18:52:19.058 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.4.log: 85.6% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.cephadm.log.gz 2026-03-09T18:52:19.061 INFO:teuthology.orchestra.run.vm00.stderr: 91.8% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph.cephadm.log.gz 2026-03-09T18:52:19.065 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.0.log 2026-03-09T18:52:19.069 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/tcmu-runner.log: 85.2% -- 
replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/tcmu-runner.log.gz 2026-03-09T18:52:19.084 INFO:teuthology.orchestra.run.vm08.stderr: 90.4% 94.7% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-volume.log.gz 2026-03-09T18:52:19.084 INFO:teuthology.orchestra.run.vm08.stderr: -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mgr.x.log.gz 2026-03-09T18:52:19.085 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.0.log: 94.8% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-volume.log.gz 2026-03-09T18:52:19.645 INFO:teuthology.orchestra.run.vm00.stderr: 90.0% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mgr.y.log.gz 2026-03-09T18:52:19.725 INFO:teuthology.orchestra.run.vm08.stderr: 92.7% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.b.log.gz 2026-03-09T18:52:20.083 INFO:teuthology.orchestra.run.vm00.stderr: 92.6% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.c.log.gz 2026-03-09T18:52:20.739 INFO:teuthology.orchestra.run.vm08.stderr: 93.7% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.6.log.gz 2026-03-09T18:52:21.033 INFO:teuthology.orchestra.run.vm00.stderr: 93.5% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.2.log.gz 2026-03-09T18:52:21.073 INFO:teuthology.orchestra.run.vm08.stderr: 93.8% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.5.log.gz 2026-03-09T18:52:21.096 INFO:teuthology.orchestra.run.vm08.stderr: 94.2% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.7.log.gz 2026-03-09T18:52:21.136 INFO:teuthology.orchestra.run.vm00.stderr: 91.3% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-mon.a.log.gz 2026-03-09T18:52:21.215 INFO:teuthology.orchestra.run.vm08.stderr: 94.0% -- replaced with 
/var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.4.log.gz 2026-03-09T18:52:21.217 INFO:teuthology.orchestra.run.vm08.stderr: 2026-03-09T18:52:21.217 INFO:teuthology.orchestra.run.vm08.stderr:real 0m2.244s 2026-03-09T18:52:21.217 INFO:teuthology.orchestra.run.vm08.stderr:user 0m4.113s 2026-03-09T18:52:21.217 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.247s 2026-03-09T18:52:21.405 INFO:teuthology.orchestra.run.vm00.stderr: 93.8% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.0.log.gz 2026-03-09T18:52:21.432 INFO:teuthology.orchestra.run.vm00.stderr: 93.9% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.1.log.gz 2026-03-09T18:52:21.635 INFO:teuthology.orchestra.run.vm00.stderr: 93.9% -- replaced with /var/log/ceph/614f4990-1be4-11f1-8b84-dfd1edd9d965/ceph-osd.3.log.gz 2026-03-09T18:52:21.635 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-09T18:52:21.635 INFO:teuthology.orchestra.run.vm00.stderr:real 0m2.665s 2026-03-09T18:52:21.636 INFO:teuthology.orchestra.run.vm00.stderr:user 0m4.850s 2026-03-09T18:52:21.636 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m0.265s 2026-03-09T18:52:21.636 INFO:tasks.cephadm:Archiving logs... 2026-03-09T18:52:21.636 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602/remote/vm00/log 2026-03-09T18:52:21.636 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:52:21.933 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602/remote/vm08/log 2026-03-09T18:52:21.933 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:52:22.142 INFO:tasks.cephadm:Removing cluster... 
2026-03-09T18:52:22.142 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force 2026-03-09T18:52:22.769 INFO:teuthology.orchestra.run.vm00.stderr:Traceback (most recent call last): 2026-03-09T18:52:22.770 INFO:teuthology.orchestra.run.vm00.stderr: File "/home/ubuntu/cephtest/cephadm", line 8634, in 2026-03-09T18:52:22.770 INFO:teuthology.orchestra.run.vm00.stderr: main() 2026-03-09T18:52:22.770 INFO:teuthology.orchestra.run.vm00.stderr: File "/home/ubuntu/cephtest/cephadm", line 8622, in main 2026-03-09T18:52:22.770 INFO:teuthology.orchestra.run.vm00.stderr: r = ctx.func(ctx) 2026-03-09T18:52:22.770 INFO:teuthology.orchestra.run.vm00.stderr: File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster 2026-03-09T18:52:22.771 INFO:teuthology.orchestra.run.vm00.stderr: with open(files[0]) as f: 2026-03-09T18:52:22.771 INFO:teuthology.orchestra.run.vm00.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf' 2026-03-09T18:52:22.783 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:52:22.783 INFO:tasks.cephadm:Teardown complete 2026-03-09T18:52:22.783 ERROR:teuthology.run_tasks:Manager failed: cephadm Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 2216, in task with contextutil.nested( File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File 
"/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 1845, in initialize_config yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 229, in download_cephadm _rm_cluster(ctx, cluster_name) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 383, in _rm_cluster remote.run(args=[ File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force' 2026-03-09T18:52:22.784 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-09T18:52:22.786 INFO:teuthology.task.clock:Checking final clock skew... 
2026-03-09T18:52:22.786 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T18:52:22.787 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:-netcup01.therav 171.237.1.87 2 u 27 128 377 28.355 -5.640 0.401 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:-141.84.43.73 40.33.41.76 2 u 79 128 377 31.593 -5.575 2.225 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:-sv1.ggsrv.de 192.53.103.103 2 u 87 128 377 24.954 -3.263 0.856 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:-static.81.54.25 131.188.3.222 2 u 83 128 377 25.155 -3.449 0.546 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:+node-4.infogral 168.239.11.197 2 u 86 128 377 23.523 -3.103 0.587 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:-mail.light-spee 124.216.164.14 2 u 88 128 377 28.937 -2.955 0.768 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:*static.222.16.4 35.73.197.144 2 u 9 128 377 0.410 -3.195 0.596 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:-home.of.the.smi .LIgp. 1 u 87 128 377 38.396 -1.425 1.186 2026-03-09T18:52:22.821 INFO:teuthology.orchestra.run.vm00.stdout:+time.cloudflare 10.165.8.4 3 u 7 128 377 20.441 -2.299 0.500 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout:============================================================================== 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout: 3.ubuntu.pool.n .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout:-141.84.43.75 189.97.54.122 2 u 80 128 377 34.898 -1.588 0.333 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout:-sv1.ggsrv.de 192.53.103.103 2 u 62 64 377 24.968 -0.396 0.065 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout:+mail.light-spee 124.216.164.14 2 u 1 64 377 28.914 +0.045 0.065 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout:*static.222.16.4 35.73.197.144 2 u - 64 377 0.326 -0.210 0.086 2026-03-09T18:52:22.989 INFO:teuthology.orchestra.run.vm08.stdout:+node-4.infogral 168.239.11.197 2 u 59 64 377 23.534 -0.086 0.038 2026-03-09T18:52:22.989 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-09T18:52:22.991 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-09T18:52:22.992 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-09T18:52:22.994 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-09T18:52:22.995 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-09T18:52:22.997 INFO:teuthology.task.internal:Duration was 2223.374417 seconds 2026-03-09T18:52:22.997 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-09T18:52:22.999 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-09T18:52:22.999 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T18:52:23.000 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T18:52:23.028 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-09T18:52:23.028 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local 2026-03-09T18:52:23.028 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T18:52:23.079 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local 2026-03-09T18:52:23.079 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root 
filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T18:52:23.089 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-09T18:52:23.089 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:52:23.122 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:52:23.250 INFO:teuthology.task.internal.syslog:Compressing syslogs... 2026-03-09T18:52:23.250 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:52:23.251 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:52:23.256 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T18:52:23.257 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T18:52:23.257 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:52:23.257 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0%/home/ubuntu/cephtest/archive/syslog/kern.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T18:52:23.257 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T18:52:23.257 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T18:52:23.258 
INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T18:52:23.258 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T18:52:23.258 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T18:52:23.258 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T18:52:23.277 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 90.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T18:52:23.284 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 91.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T18:52:23.286 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-09T18:52:23.288 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
2026-03-09T18:52:23.288 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T18:52:23.337 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T18:52:23.345 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-09T18:52:23.347 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:52:23.381 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:52:23.387 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core 2026-03-09T18:52:23.393 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core 2026-03-09T18:52:23.401 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:52:23.439 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:52:23.439 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T18:52:23.446 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:52:23.446 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-09T18:52:23.448 INFO:teuthology.task.internal:Transferring archived files... 
2026-03-09T18:52:23.449 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602/remote/vm00 2026-03-09T18:52:23.449 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T18:52:23.490 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/602/remote/vm08 2026-03-09T18:52:23.490 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T18:52:23.498 INFO:teuthology.task.internal:Removing archive directory... 2026-03-09T18:52:23.498 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T18:52:23.534 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T18:52:23.542 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-09T18:52:23.545 INFO:teuthology.task.internal:Not uploading archives. 2026-03-09T18:52:23.545 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-09T18:52:23.547 INFO:teuthology.task.internal:Tidying up after the test... 
2026-03-09T18:52:23.547 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:52:23.578 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:52:23.580 INFO:teuthology.orchestra.run.vm00.stdout:   258076      4 drwxr-xr-x   2 ubuntu   ubuntu       4096 Mar  9 18:52 /home/ubuntu/cephtest
2026-03-09T18:52:23.581 INFO:teuthology.orchestra.run.vm00.stdout:   258199    316 -rwxrwxr-x   1 ubuntu   ubuntu     320521 Mar  9 18:18 /home/ubuntu/cephtest/cephadm
2026-03-09T18:52:23.581 INFO:teuthology.orchestra.run.vm00.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-09T18:52:23.584 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:52:23.584 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 2216, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 1845, in initialize_config
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 229, in download_cephadm
    _rm_cluster(ctx, cluster_name)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 383, in _rm_cluster
    remote.run(args=[
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-09T18:52:23.584 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T18:52:23.587 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm00 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force'
2026-03-09T18:52:23.587 INFO:teuthology.run:Summary data:
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut
  3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
duration: 2223.374416589737
failure_reason: 'Command failed on vm00 with status 1: ''sudo /home/ubuntu/cephtest/cephadm
  rm-cluster --fsid 614f4990-1be4-11f1-8b84-dfd1edd9d965 --force'''
owner: kyr
status: fail
success: false

2026-03-09T18:52:23.588 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:52:23.589 INFO:teuthology.orchestra.run.vm08.stdout:   258069      4 drwxr-xr-x   2 ubuntu   ubuntu       4096 Mar  9 18:52 /home/ubuntu/cephtest
2026-03-09T18:52:23.589 INFO:teuthology.orchestra.run.vm08.stdout:   258199    316 -rwxrwxr-x   1 ubuntu   ubuntu     320521 Mar  9 18:18 /home/ubuntu/cephtest/cephadm
2026-03-09T18:52:23.589 INFO:teuthology.orchestra.run.vm08.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-09T18:52:23.608 INFO:teuthology.run:FAIL